This is gsl-ref.info, produced by makeinfo version 4.6 from gsl-ref.texi.

INFO-DIR-SECTION Scientific software
START-INFO-DIR-ENTRY
* gsl-ref: (gsl-ref).                   GNU Scientific Library - Reference
END-INFO-DIR-ENTRY

   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 The GSL Team.

   Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with the Invariant Sections being "GNU General Public License" and "Free Software Needs Free Documentation", the Front-Cover text being "A GNU Manual", and with the Back-Cover Text being (a) (see below).  A copy of the license is included in the section entitled "GNU Free Documentation License".

   (a) The Back-Cover Text is: "You have freedom to copy and modify this GNU Manual, like GNU software."


File: gsl-ref.info,  Node: Level 1 GSL BLAS Interface,  Next: Level 2 GSL BLAS Interface,  Up: GSL BLAS Interface

Level 1
-------

 - Function: int gsl_blas_sdsdot (float ALPHA, const gsl_vector_float * X, const gsl_vector_float * Y, float * RESULT)
     This function computes the sum \alpha + x^T y for the vectors X and Y, returning the result in RESULT.

 - Function: int gsl_blas_sdot (const gsl_vector_float * X, const gsl_vector_float * Y, float * RESULT)
 - Function: int gsl_blas_dsdot (const gsl_vector_float * X, const gsl_vector_float * Y, double * RESULT)
 - Function: int gsl_blas_ddot (const gsl_vector * X, const gsl_vector * Y, double * RESULT)
     These functions compute the scalar product x^T y for the vectors X and Y, returning the result in RESULT.

 - Function: int gsl_blas_cdotu (const gsl_vector_complex_float * X, const gsl_vector_complex_float * Y, gsl_complex_float * DOTU)
 - Function: int gsl_blas_zdotu (const gsl_vector_complex * X, const gsl_vector_complex * Y, gsl_complex * DOTU)
     These functions compute the complex scalar product x^T y for the vectors X and Y, returning the result in DOTU.

 - Function: int gsl_blas_cdotc (const gsl_vector_complex_float * X, const gsl_vector_complex_float * Y, gsl_complex_float * DOTC)
 - Function: int gsl_blas_zdotc (const gsl_vector_complex * X, const gsl_vector_complex * Y, gsl_complex * DOTC)
     These functions compute the complex conjugate scalar product x^H y for the vectors X and Y, returning the result in DOTC.

 - Function: float gsl_blas_snrm2 (const gsl_vector_float * X)
 - Function: double gsl_blas_dnrm2 (const gsl_vector * X)
     These functions compute the Euclidean norm ||x||_2 = \sqrt {\sum x_i^2} of the vector X.

 - Function: float gsl_blas_scnrm2 (const gsl_vector_complex_float * X)
 - Function: double gsl_blas_dznrm2 (const gsl_vector_complex * X)
     These functions compute the Euclidean norm of the complex vector X, ||x||_2 = \sqrt {\sum (\Re(x_i)^2 + \Im(x_i)^2)}.

 - Function: float gsl_blas_sasum (const gsl_vector_float * X)
 - Function: double gsl_blas_dasum (const gsl_vector * X)
     These functions compute the absolute sum \sum |x_i| of the elements of the vector X.

 - Function: float gsl_blas_scasum (const gsl_vector_complex_float * X)
 - Function: double gsl_blas_dzasum (const gsl_vector_complex * X)
     These functions compute the sum of the magnitudes of the real and imaginary parts of the complex vector X, \sum |\Re(x_i)| + |\Im(x_i)|.
 - Function: CBLAS_INDEX_t gsl_blas_isamax (const gsl_vector_float * X)
 - Function: CBLAS_INDEX_t gsl_blas_idamax (const gsl_vector * X)
 - Function: CBLAS_INDEX_t gsl_blas_icamax (const gsl_vector_complex_float * X)
 - Function: CBLAS_INDEX_t gsl_blas_izamax (const gsl_vector_complex * X)
     These functions return the index of the largest element of the vector X.  The largest element is determined by its absolute magnitude for real vectors and by the sum of the magnitudes of the real and imaginary parts |\Re(x_i)| + |\Im(x_i)| for complex vectors.  If the largest value occurs several times then the index of the first occurrence is returned.

 - Function: int gsl_blas_sswap (gsl_vector_float * X, gsl_vector_float * Y)
 - Function: int gsl_blas_dswap (gsl_vector * X, gsl_vector * Y)
 - Function: int gsl_blas_cswap (gsl_vector_complex_float * X, gsl_vector_complex_float * Y)
 - Function: int gsl_blas_zswap (gsl_vector_complex * X, gsl_vector_complex * Y)
     These functions exchange the elements of the vectors X and Y.

 - Function: int gsl_blas_scopy (const gsl_vector_float * X, gsl_vector_float * Y)
 - Function: int gsl_blas_dcopy (const gsl_vector * X, gsl_vector * Y)
 - Function: int gsl_blas_ccopy (const gsl_vector_complex_float * X, gsl_vector_complex_float * Y)
 - Function: int gsl_blas_zcopy (const gsl_vector_complex * X, gsl_vector_complex * Y)
     These functions copy the elements of the vector X into the vector Y.

 - Function: int gsl_blas_saxpy (float ALPHA, const gsl_vector_float * X, gsl_vector_float * Y)
 - Function: int gsl_blas_daxpy (double ALPHA, const gsl_vector * X, gsl_vector * Y)
 - Function: int gsl_blas_caxpy (const gsl_complex_float ALPHA, const gsl_vector_complex_float * X, gsl_vector_complex_float * Y)
 - Function: int gsl_blas_zaxpy (const gsl_complex ALPHA, const gsl_vector_complex * X, gsl_vector_complex * Y)
     These functions compute the sum y = \alpha x + y for the vectors X and Y.

 - Function: void gsl_blas_sscal (float ALPHA, gsl_vector_float * X)
 - Function: void gsl_blas_dscal (double ALPHA, gsl_vector * X)
 - Function: void gsl_blas_cscal (const gsl_complex_float ALPHA, gsl_vector_complex_float * X)
 - Function: void gsl_blas_zscal (const gsl_complex ALPHA, gsl_vector_complex * X)
 - Function: void gsl_blas_csscal (float ALPHA, gsl_vector_complex_float * X)
 - Function: void gsl_blas_zdscal (double ALPHA, gsl_vector_complex * X)
     These functions rescale the vector X by the multiplicative factor ALPHA.

 - Function: int gsl_blas_srotg (float A[], float B[], float C[], float S[])
 - Function: int gsl_blas_drotg (double A[], double B[], double C[], double S[])
     These functions compute a Givens rotation (c,s) which zeroes the vector (a,b),

          [  c  s ] [ a ] = [ r ]
          [ -s  c ] [ b ]   [ 0 ]

     The variables A and B are overwritten by the routine.

 - Function: int gsl_blas_srot (gsl_vector_float * X, gsl_vector_float * Y, float C, float S)
 - Function: int gsl_blas_drot (gsl_vector * X, gsl_vector * Y, const double C, const double S)
     These functions apply a Givens rotation (x', y') = (c x + s y, -s x + c y) to the vectors X, Y.

 - Function: int gsl_blas_srotmg (float D1[], float D2[], float B1[], float B2, float P[])
 - Function: int gsl_blas_drotmg (double D1[], double D2[], double B1[], double B2, double P[])
     These functions compute a modified Givens transformation.  The modified Givens transformation is defined in the original Level-1 BLAS specification, given in the references.
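   For illustration, the following fragment is a minimal sketch combining some of the Level-1 routines above (`gsl_blas_ddot', `gsl_blas_dnrm2' and `gsl_blas_daxpy') in a complete program; the vector values are arbitrary and chosen only for demonstration.

     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       double x_data[] = { 1.0, 2.0, 3.0 };
       double y_data[] = { 4.0, 5.0, 6.0 };

       gsl_vector_view x = gsl_vector_view_array (x_data, 3);
       gsl_vector_view y = gsl_vector_view_array (y_data, 3);

       double dot, norm;

       /* dot = x^T y */
       gsl_blas_ddot (&x.vector, &y.vector, &dot);

       /* norm = ||x||_2 */
       norm = gsl_blas_dnrm2 (&x.vector);

       /* y = 2 x + y */
       gsl_blas_daxpy (2.0, &x.vector, &y.vector);

       printf ("x.y = %g, ||x|| = %g, y = (%g, %g, %g)\n",
               dot, norm, y_data[0], y_data[1], y_data[2]);
       return 0;
     }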
 - Function: int gsl_blas_srotm (gsl_vector_float * X, gsl_vector_float * Y, const float P[])
 - Function: int gsl_blas_drotm (gsl_vector * X, gsl_vector * Y, const double P[])
     These functions apply a modified Givens transformation.


File: gsl-ref.info,  Node: Level 2 GSL BLAS Interface,  Next: Level 3 GSL BLAS Interface,  Prev: Level 1 GSL BLAS Interface,  Up: GSL BLAS Interface

Level 2
-------

 - Function: int gsl_blas_sgemv (CBLAS_TRANSPOSE_t TRANSA, float ALPHA, const gsl_matrix_float * A, const gsl_vector_float * X, float BETA, gsl_vector_float * Y)
 - Function: int gsl_blas_dgemv (CBLAS_TRANSPOSE_t TRANSA, double ALPHA, const gsl_matrix * A, const gsl_vector * X, double BETA, gsl_vector * Y)
 - Function: int gsl_blas_cgemv (CBLAS_TRANSPOSE_t TRANSA, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_vector_complex_float * X, const gsl_complex_float BETA, gsl_vector_complex_float * Y)
 - Function: int gsl_blas_zgemv (CBLAS_TRANSPOSE_t TRANSA, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_vector_complex * X, const gsl_complex BETA, gsl_vector_complex * Y)
     These functions compute the matrix-vector product and sum y = \alpha op(A) x + \beta y, where op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'.

 - Function: int gsl_blas_strmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_float * A, gsl_vector_float * X)
 - Function: int gsl_blas_dtrmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix * A, gsl_vector * X)
 - Function: int gsl_blas_ctrmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex_float * A, gsl_vector_complex_float * X)
 - Function: int gsl_blas_ztrmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex * A, gsl_vector_complex * X)
     These functions compute the matrix-vector product x = op(A) x for the triangular matrix A, where op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'.  When UPLO is `CblasUpper' then the upper triangle of A is used, and when UPLO is `CblasLower' then the lower triangle of A is used.  If DIAG is `CblasNonUnit' then the diagonal of the matrix is used, but if DIAG is `CblasUnit' then the diagonal elements of the matrix A are taken as unity and are not referenced.

 - Function: int gsl_blas_strsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_float * A, gsl_vector_float * X)
 - Function: int gsl_blas_dtrsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix * A, gsl_vector * X)
 - Function: int gsl_blas_ctrsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex_float * A, gsl_vector_complex_float * X)
 - Function: int gsl_blas_ztrsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex * A, gsl_vector_complex * X)
     These functions solve the triangular system op(A) x = b in-place for X, computing x = inv(op(A)) x, where op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'.  When UPLO is `CblasUpper' then the upper triangle of A is used, and when UPLO is `CblasLower' then the lower triangle of A is used.  If DIAG is `CblasNonUnit' then the diagonal of the matrix is used, but if DIAG is `CblasUnit' then the diagonal elements of the matrix A are taken as unity and are not referenced.
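   As a minimal sketch of the Level-2 interface, the following program applies `gsl_blas_dgemv' to a small matrix held in a `gsl_matrix_view'; the data values are arbitrary and chosen only for demonstration.

     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       double a_data[] = { 1.0, 2.0,
                           3.0, 4.0 };
       double x_data[] = { 1.0, 1.0 };
       double y_data[] = { 0.0, 0.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 2);
       gsl_vector_view x = gsl_vector_view_array (x_data, 2);
       gsl_vector_view y = gsl_vector_view_array (y_data, 2);

       /* y = 1.0 * A x + 0.0 * y */
       gsl_blas_dgemv (CblasNoTrans, 1.0, &A.matrix, &x.vector,
                       0.0, &y.vector);

       printf ("y = (%g, %g)\n", y_data[0], y_data[1]);
       return 0;
     }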
- Function: int gsl_blas_ssymv (CBLAS_UPLO_t UPLO, float ALPHA, const gsl_matrix_float * A, const gsl_vector_float * X, float BETA, gsl_vector_float * Y) - Function: int gsl_blas_dsymv (CBLAS_UPLO_t UPLO, double ALPHA, const gsl_matrix * A, const gsl_vector * X, double BETA, gsl_vector * Y) These functions compute the matrix-vector product and sum y = \alpha A x + \beta y for the symmetric matrix A. Since the matrix A is symmetric only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. - Function: int gsl_blas_chemv (CBLAS_UPLO_t UPLO, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_vector_complex_float * X, const gsl_complex_float BETA, gsl_vector_complex_float * Y) - Function: int gsl_blas_zhemv (CBLAS_UPLO_t UPLO, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_vector_complex * X, const gsl_complex BETA, gsl_vector_complex * Y) These functions compute the matrix-vector product and sum y = \alpha A x + \beta y for the hermitian matrix A. Since the matrix A is hermitian only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. The imaginary elements of the diagonal are automatically assumed to be zero and are not referenced. - Function: int gsl_blas_sger (float ALPHA, const gsl_vector_float * X, const gsl_vector_float * Y, gsl_matrix_float * A) - Function: int gsl_blas_dger (double ALPHA, const gsl_vector * X, const gsl_vector * Y, gsl_matrix * A) - Function: int gsl_blas_cgeru (const gsl_complex_float ALPHA, const gsl_vector_complex_float * X, const gsl_vector_complex_float * Y, gsl_matrix_complex_float * A) - Function: int gsl_blas_zgeru (const gsl_complex ALPHA, const gsl_vector_complex * X, const gsl_vector_complex * Y, gsl_matrix_complex * A) These functions compute the rank-1 update A = \alpha x y^T + A of the matrix A. - Function: int gsl_blas_cgerc (const gsl_complex_float ALPHA, const gsl_vector_complex_float * X, const gsl_vector_complex_float * Y, gsl_matrix_complex_float * A) - Function: int gsl_blas_zgerc (const gsl_complex ALPHA, const gsl_vector_complex * X, const gsl_vector_complex * Y, gsl_matrix_complex * A) These functions compute the conjugate rank-1 update A = \alpha x y^H + A of the matrix A. - Function: int gsl_blas_ssyr (CBLAS_UPLO_t UPLO, float ALPHA, const gsl_vector_float * X, gsl_matrix_float * A) - Function: int gsl_blas_dsyr (CBLAS_UPLO_t UPLO, double ALPHA, const gsl_vector * X, gsl_matrix * A) These functions compute the symmetric rank-1 update A = \alpha x x^T + A of the symmetric matrix A. Since the matrix A is symmetric only its upper half or lower half need to be stored. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. - Function: int gsl_blas_cher (CBLAS_UPLO_t UPLO, float ALPHA, const gsl_vector_complex_float * X, gsl_matrix_complex_float * A) - Function: int gsl_blas_zher (CBLAS_UPLO_t UPLO, double ALPHA, const gsl_vector_complex * X, gsl_matrix_complex * A) These functions compute the hermitian rank-1 update A = \alpha x x^H + A of the hermitian matrix A. Since the matrix A is hermitian only its upper half or lower half need to be stored. 
When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used.  The imaginary elements of the diagonal are automatically set to zero.

 - Function: int gsl_blas_ssyr2 (CBLAS_UPLO_t UPLO, float ALPHA, const gsl_vector_float * X, const gsl_vector_float * Y, gsl_matrix_float * A)
 - Function: int gsl_blas_dsyr2 (CBLAS_UPLO_t UPLO, double ALPHA, const gsl_vector * X, const gsl_vector * Y, gsl_matrix * A)
     These functions compute the symmetric rank-2 update A = \alpha x y^T + \alpha y x^T + A of the symmetric matrix A.  Since the matrix A is symmetric only its upper half or lower half need to be stored.  When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used.

 - Function: int gsl_blas_cher2 (CBLAS_UPLO_t UPLO, const gsl_complex_float ALPHA, const gsl_vector_complex_float * X, const gsl_vector_complex_float * Y, gsl_matrix_complex_float * A)
 - Function: int gsl_blas_zher2 (CBLAS_UPLO_t UPLO, const gsl_complex ALPHA, const gsl_vector_complex * X, const gsl_vector_complex * Y, gsl_matrix_complex * A)
     These functions compute the hermitian rank-2 update A = \alpha x y^H + \alpha^* y x^H + A of the hermitian matrix A.  Since the matrix A is hermitian only its upper half or lower half need to be stored.  When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used.  The imaginary elements of the diagonal are automatically set to zero.


File: gsl-ref.info,  Node: Level 3 GSL BLAS Interface,  Prev: Level 2 GSL BLAS Interface,  Up: GSL BLAS Interface

Level 3
-------

 - Function: int gsl_blas_sgemm (CBLAS_TRANSPOSE_t TRANSA, CBLAS_TRANSPOSE_t TRANSB, float ALPHA, const gsl_matrix_float * A, const gsl_matrix_float * B, float BETA, gsl_matrix_float * C)
 - Function: int gsl_blas_dgemm (CBLAS_TRANSPOSE_t TRANSA, CBLAS_TRANSPOSE_t TRANSB, double ALPHA, const gsl_matrix * A, const gsl_matrix * B, double BETA, gsl_matrix * C)
 - Function: int gsl_blas_cgemm (CBLAS_TRANSPOSE_t TRANSA, CBLAS_TRANSPOSE_t TRANSB, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_matrix_complex_float * B, const gsl_complex_float BETA, gsl_matrix_complex_float * C)
 - Function: int gsl_blas_zgemm (CBLAS_TRANSPOSE_t TRANSA, CBLAS_TRANSPOSE_t TRANSB, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_matrix_complex * B, const gsl_complex BETA, gsl_matrix_complex * C)
     These functions compute the matrix-matrix product and sum C = \alpha op(A) op(B) + \beta C where op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans' and similarly for the parameter TRANSB.
- Function: int gsl_blas_ssymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, float ALPHA, const gsl_matrix_float * A, const gsl_matrix_float * B, float BETA, gsl_matrix_float * C) - Function: int gsl_blas_dsymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, double ALPHA, const gsl_matrix * A, const gsl_matrix * B, double BETA, gsl_matrix * C) - Function: int gsl_blas_csymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_matrix_complex_float * B, const gsl_complex_float BETA, gsl_matrix_complex_float * C) - Function: int gsl_blas_zsymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_matrix_complex * B, const gsl_complex BETA, gsl_matrix_complex * C) These functions compute the matrix-matrix product and sum C = \alpha A B + \beta C for SIDE is `CblasLeft' and C = \alpha B A + \beta C for SIDE is `CblasRight', where the matrix A is symmetric. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. - Function: int gsl_blas_chemm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_matrix_complex_float * B, const gsl_complex_float BETA, gsl_matrix_complex_float * C) - Function: int gsl_blas_zhemm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_matrix_complex * B, const gsl_complex BETA, gsl_matrix_complex * C) These functions compute the matrix-matrix product and sum C = \alpha A B + \beta C for SIDE is `CblasLeft' and C = \alpha B A + \beta C for SIDE is `CblasRight', where the matrix A is hermitian. When UPLO is `CblasUpper' then the upper triangle and diagonal of A are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of A are used. The imaginary elements of the diagonal are automatically set to zero. - Function: int gsl_blas_strmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, float ALPHA, const gsl_matrix_float * A, gsl_matrix_float * B) - Function: int gsl_blas_dtrmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, double ALPHA, const gsl_matrix * A, gsl_matrix * B) - Function: int gsl_blas_ctrmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, gsl_matrix_complex_float * B) - Function: int gsl_blas_ztrmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_complex ALPHA, const gsl_matrix_complex * A, gsl_matrix_complex * B) These functions compute the matrix-matrix product B = \alpha op(A) B for SIDE is `CblasLeft' and B = \alpha B op(A) for SIDE is `CblasRight'. The matrix A is triangular and op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans' When UPLO is `CblasUpper' then the upper triangle of A is used, and when UPLO is `CblasLower' then the lower triangle of A is used. If DIAG is `CblasNonUnit' then the diagonal of A is used, but if DIAG is `CblasUnit' then the diagonal elements of the matrix A are taken as unity and are not referenced. 
 - Function: int gsl_blas_strsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, float ALPHA, const gsl_matrix_float * A, gsl_matrix_float * B)
 - Function: int gsl_blas_dtrsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, double ALPHA, const gsl_matrix * A, gsl_matrix * B)
 - Function: int gsl_blas_ctrsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, gsl_matrix_complex_float * B)
 - Function: int gsl_blas_ztrsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const gsl_complex ALPHA, const gsl_matrix_complex * A, gsl_matrix_complex * B)
     These functions compute the inverse-matrix matrix product B = \alpha op(inv(A)) B for SIDE is `CblasLeft' and B = \alpha B op(inv(A)) for SIDE is `CblasRight'.  The matrix A is triangular and op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'.  When UPLO is `CblasUpper' then the upper triangle of A is used, and when UPLO is `CblasLower' then the lower triangle of A is used.  If DIAG is `CblasNonUnit' then the diagonal of A is used, but if DIAG is `CblasUnit' then the diagonal elements of the matrix A are taken as unity and are not referenced.

 - Function: int gsl_blas_ssyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, float ALPHA, const gsl_matrix_float * A, float BETA, gsl_matrix_float * C)
 - Function: int gsl_blas_dsyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, double ALPHA, const gsl_matrix * A, double BETA, gsl_matrix * C)
 - Function: int gsl_blas_csyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_complex_float BETA, gsl_matrix_complex_float * C)
 - Function: int gsl_blas_zsyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_complex BETA, gsl_matrix_complex * C)
     These functions compute a rank-k update of the symmetric matrix C, C = \alpha A A^T + \beta C when TRANS is `CblasNoTrans' and C = \alpha A^T A + \beta C when TRANS is `CblasTrans'.  Since the matrix C is symmetric only its upper half or lower half need to be stored.  When UPLO is `CblasUpper' then the upper triangle and diagonal of C are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of C are used.

 - Function: int gsl_blas_cherk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, float ALPHA, const gsl_matrix_complex_float * A, float BETA, gsl_matrix_complex_float * C)
 - Function: int gsl_blas_zherk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, double ALPHA, const gsl_matrix_complex * A, double BETA, gsl_matrix_complex * C)
     These functions compute a rank-k update of the hermitian matrix C, C = \alpha A A^H + \beta C when TRANS is `CblasNoTrans' and C = \alpha A^H A + \beta C when TRANS is `CblasConjTrans'.  Since the matrix C is hermitian only its upper half or lower half need to be stored.  When UPLO is `CblasUpper' then the upper triangle and diagonal of C are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of C are used.  The imaginary elements of the diagonal are automatically set to zero.
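   The following minimal sketch forms the product C = A A^T with `gsl_blas_dsyrk', writing only the upper triangle and diagonal of C; the matrix entries are arbitrary example data.

     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       double a_data[] = { 1.0, 2.0,
                           3.0, 4.0,
                           5.0, 6.0 };          /* 3-by-2 matrix A */
       double c_data[] = { 0.0, 0.0, 0.0,
                           0.0, 0.0, 0.0,
                           0.0, 0.0, 0.0 };     /* 3-by-3 result C */

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 3, 2);
       gsl_matrix_view C = gsl_matrix_view_array (c_data, 3, 3);

       /* C = 1.0 * A A^T + 0.0 * C, upper triangle of C written */
       gsl_blas_dsyrk (CblasUpper, CblasNoTrans, 1.0, &A.matrix,
                       0.0, &C.matrix);

       printf ("C(0,0) = %g, C(0,1) = %g, C(1,1) = %g\n",
               c_data[0], c_data[1], c_data[4]);
       return 0;
     }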
 - Function: int gsl_blas_ssyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, float ALPHA, const gsl_matrix_float * A, const gsl_matrix_float * B, float BETA, gsl_matrix_float * C)
 - Function: int gsl_blas_dsyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, double ALPHA, const gsl_matrix * A, const gsl_matrix * B, double BETA, gsl_matrix * C)
 - Function: int gsl_blas_csyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_matrix_complex_float * B, const gsl_complex_float BETA, gsl_matrix_complex_float * C)
 - Function: int gsl_blas_zsyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_matrix_complex * B, const gsl_complex BETA, gsl_matrix_complex * C)
     These functions compute a rank-2k update of the symmetric matrix C, C = \alpha A B^T + \alpha B A^T + \beta C when TRANS is `CblasNoTrans' and C = \alpha A^T B + \alpha B^T A + \beta C when TRANS is `CblasTrans'.  Since the matrix C is symmetric only its upper half or lower half need to be stored.  When UPLO is `CblasUpper' then the upper triangle and diagonal of C are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of C are used.

 - Function: int gsl_blas_cher2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex_float ALPHA, const gsl_matrix_complex_float * A, const gsl_matrix_complex_float * B, float BETA, gsl_matrix_complex_float * C)
 - Function: int gsl_blas_zher2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t TRANS, const gsl_complex ALPHA, const gsl_matrix_complex * A, const gsl_matrix_complex * B, double BETA, gsl_matrix_complex * C)
     These functions compute a rank-2k update of the hermitian matrix C, C = \alpha A B^H + \alpha^* B A^H + \beta C when TRANS is `CblasNoTrans' and C = \alpha A^H B + \alpha^* B^H A + \beta C when TRANS is `CblasConjTrans'.  Since the matrix C is hermitian only its upper half or lower half need to be stored.  When UPLO is `CblasUpper' then the upper triangle and diagonal of C are used, and when UPLO is `CblasLower' then the lower triangle and diagonal of C are used.  The imaginary elements of the diagonal are automatically set to zero.


File: gsl-ref.info,  Node: BLAS Examples,  Next: BLAS References and Further Reading,  Prev: GSL BLAS Interface,  Up: BLAS Support

Examples
========

   The following program computes the product of two matrices using the Level-3 BLAS function DGEMM,

          [ 0.11 0.12 0.13 ]  [ 1011 1012 ]     [ 367.76 368.12 ]
          [ 0.21 0.22 0.23 ]  [ 1021 1022 ]  =  [ 674.06 674.72 ]
                              [ 1031 1032 ]

   The matrices are stored in row major order, according to the C convention for arrays.
     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       double a[] = { 0.11, 0.12, 0.13,
                      0.21, 0.22, 0.23 };

       double b[] = { 1011, 1012,
                      1021, 1022,
                      1031, 1032 };

       double c[] = { 0.00, 0.00,
                      0.00, 0.00 };

       gsl_matrix_view A = gsl_matrix_view_array(a, 2, 3);
       gsl_matrix_view B = gsl_matrix_view_array(b, 3, 2);
       gsl_matrix_view C = gsl_matrix_view_array(c, 2, 2);

       /* Compute C = A B */

       gsl_blas_dgemm (CblasNoTrans, CblasNoTrans,
                       1.0, &A.matrix, &B.matrix,
                       0.0, &C.matrix);

       printf ("[ %g, %g\n", c[0], c[1]);
       printf ("  %g, %g ]\n", c[2], c[3]);

       return 0;
     }

   Here is the output from the program,

     $ ./a.out
     [ 367.76, 368.12
       674.06, 674.72 ]


File: gsl-ref.info,  Node: BLAS References and Further Reading,  Prev: BLAS Examples,  Up: BLAS Support

References and Further Reading
==============================

   Information on the BLAS standards, including both the legacy and draft interface standards, is available online from the BLAS Homepage and BLAS Technical Forum web-site.

`BLAS Homepage'
     `http://www.netlib.org/blas/'

`BLAS Technical Forum'
     `http://www.netlib.org/cgi-bin/checkout/blast/blast.pl'

   The following papers contain the specifications for Level 1, Level 2 and Level 3 BLAS.

     C. Lawson, R. Hanson, D. Kincaid, F. Krogh, "Basic Linear Algebra Subprograms for Fortran Usage", `ACM Transactions on Mathematical Software', Vol. 5 (1979), Pages 308-325.

     J.J. Dongarra, J. DuCroz, S. Hammarling, R. Hanson, "An Extended Set of Fortran Basic Linear Algebra Subprograms", `ACM Transactions on Mathematical Software', Vol. 14, No. 1 (1988), Pages 1-32.

     J.J. Dongarra, I. Duff, J. DuCroz, S. Hammarling, "A Set of Level 3 Basic Linear Algebra Subprograms", `ACM Transactions on Mathematical Software', Vol. 16 (1990), Pages 1-28.

   Postscript versions of the latter two papers are available from `http://www.netlib.org/blas/'.  A CBLAS wrapper for Fortran BLAS libraries is available from the same location.


File: gsl-ref.info,  Node: Linear Algebra,  Next: Eigensystems,  Prev: BLAS Support,  Up: Top

Linear Algebra
**************

This chapter describes functions for solving linear systems.  The library provides simple linear algebra operations which operate directly on the `gsl_vector' and `gsl_matrix' objects.  These are intended for use with "small" systems where simple algorithms are acceptable.  Anyone interested in large systems will want to use the sophisticated routines found in LAPACK.  The Fortran version of LAPACK is recommended as the standard package for large-scale linear algebra.  It supports blocked algorithms, specialized data representations and other optimizations.

   The functions described in this chapter are declared in the header file `gsl_linalg.h'.

* Menu:

* LU Decomposition::
* QR Decomposition::
* QR Decomposition with Column Pivoting::
* Singular Value Decomposition::
* Cholesky Decomposition::
* Tridiagonal Decomposition of Real Symmetric Matrices::
* Tridiagonal Decomposition of Hermitian Matrices::
* Bidiagonalization::
* Householder Transformations::
* Householder solver for linear systems::
* Tridiagonal Systems::
* Linear Algebra Examples::
* Linear Algebra References and Further Reading::


File: gsl-ref.info,  Node: LU Decomposition,  Next: QR Decomposition,  Up: Linear Algebra

LU Decomposition
================

   A general square matrix A has an LU decomposition into upper and lower triangular matrices,

     P A = L U

where P is a permutation matrix, L is a unit lower triangular matrix and U is an upper triangular matrix.
For square matrices this decomposition can be used to convert the linear system A x = b into a pair of triangular systems (L y = P b, U x = y), which can be solved by forward and back-substitution. Note that the LU decomposition is valid for singular matrices. - Function: int gsl_linalg_LU_decomp (gsl_matrix * A, gsl_permutation * P, int * SIGNUM) - Function: int gsl_linalg_complex_LU_decomp (gsl_matrix_complex * A, gsl_permutation * P, int * SIGNUM) These functions factorize the square matrix A into the LU decomposition PA = LU. On output the diagonal and upper triangular part of the input matrix A contain the matrix U. The lower triangular part of the input matrix (excluding the diagonal) contains L. The diagonal elements of L are unity, and are not stored. The permutation matrix P is encoded in the permutation P. The j-th column of the matrix P is given by the k-th column of the identity matrix, where k = p_j the j-th element of the permutation vector. The sign of the permutation is given by SIGNUM. It has the value (-1)^n, where n is the number of interchanges in the permutation. The algorithm used in the decomposition is Gaussian Elimination with partial pivoting (Golub & Van Loan, `Matrix Computations', Algorithm 3.4.1). - Function: int gsl_linalg_LU_solve (const gsl_matrix * LU, const gsl_permutation * P, const gsl_vector * B, gsl_vector * X) - Function: int gsl_linalg_complex_LU_solve (const gsl_matrix_complex * LU, const gsl_permutation * P, const gsl_vector_complex * B, gsl_vector_complex * X) These functions solve the square system A x = b using the LU decomposition of A into (LU, P) given by `gsl_linalg_LU_decomp' or `gsl_linalg_complex_LU_decomp'. - Function: int gsl_linalg_LU_svx (const gsl_matrix * LU, const gsl_permutation * P, gsl_vector * X) - Function: int gsl_linalg_complex_LU_svx (const gsl_matrix_complex * LU, const gsl_permutation * P, gsl_vector_complex * X) These functions solve the square system A x = b in-place using the LU decomposition of A into (LU,P). On input X should contain the right-hand side b, which is replaced by the solution on output. - Function: int gsl_linalg_LU_refine (const gsl_matrix * A, const gsl_matrix * LU, const gsl_permutation * P, const gsl_vector * B, gsl_vector * X, gsl_vector * RESIDUAL) - Function: int gsl_linalg_complex_LU_refine (const gsl_matrix_complex * A, const gsl_matrix_complex * LU, const gsl_permutation * P, const gsl_vector_complex * B, gsl_vector_complex * X, gsl_vector_complex * RESIDUAL) These functions apply an iterative improvement to X, the solution of A x = b, using the LU decomposition of A into (LU,P). The initial residual r = A x - b is also computed and stored in RESIDUAL. - Function: int gsl_linalg_LU_invert (const gsl_matrix * LU, const gsl_permutation * P, gsl_matrix * INVERSE) - Function: int gsl_linalg_complex_LU_invert (const gsl_matrix_complex * LU, const gsl_permutation * P, gsl_matrix_complex * INVERSE) These functions compute the inverse of a matrix A from its LU decomposition (LU,P), storing the result in the matrix INVERSE. The inverse is computed by solving the system A x = b for each column of the identity matrix. It is preferable to avoid direct use of the inverse whenever possible, as the linear solver functions can obtain the same result more efficiently and reliably (consult any introductory textbook on numerical linear algebra for details). 
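   As a minimal sketch, the following fragment inverts a small matrix with `gsl_linalg_LU_decomp' and `gsl_linalg_LU_invert'; the matrix values are arbitrary, and in real applications the solver functions above are usually preferable to forming the inverse explicitly.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       double a_data[] = { 2.0, 1.0,
                           1.0, 3.0 };

       gsl_matrix_view m = gsl_matrix_view_array (a_data, 2, 2);
       gsl_matrix *ainv = gsl_matrix_alloc (2, 2);
       gsl_permutation *p = gsl_permutation_alloc (2);
       int s;

       /* A is overwritten by its LU decomposition */
       gsl_linalg_LU_decomp (&m.matrix, p, &s);
       gsl_linalg_LU_invert (&m.matrix, p, ainv);

       printf ("inv(A)[0,0] = %g\n", gsl_matrix_get (ainv, 0, 0));

       gsl_matrix_free (ainv);
       gsl_permutation_free (p);
       return 0;
     }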
- Function: double gsl_linalg_LU_det (gsl_matrix * LU, int SIGNUM) - Function: gsl_complex gsl_linalg_complex_LU_det (gsl_matrix_complex * LU, int SIGNUM) These functions compute the determinant of a matrix A from its LU decomposition, LU. The determinant is computed as the product of the diagonal elements of U and the sign of the row permutation SIGNUM. - Function: double gsl_linalg_LU_lndet (gsl_matrix * LU) - Function: double gsl_linalg_complex_LU_lndet (gsl_matrix_complex * LU) These functions compute the logarithm of the absolute value of the determinant of a matrix A, \ln|\det(A)|, from its LU decomposition, LU. This function may be useful if the direct computation of the determinant would overflow or underflow. - Function: int gsl_linalg_LU_sgndet (gsl_matrix * LU, int SIGNUM) - Function: gsl_complex gsl_linalg_complex_LU_sgndet (gsl_matrix_complex * LU, int SIGNUM) These functions compute the sign or phase factor of the determinant of a matrix A, \det(A)/|\det(A)|, from its LU decomposition, LU.  File: gsl-ref.info, Node: QR Decomposition, Next: QR Decomposition with Column Pivoting, Prev: LU Decomposition, Up: Linear Algebra QR Decomposition ================ A general rectangular M-by-N matrix A has a QR decomposition into the product of an orthogonal M-by-M square matrix Q (where Q^T Q = I) and an M-by-N right-triangular matrix R, A = Q R This decomposition can be used to convert the linear system A x = b into the triangular system R x = Q^T b, which can be solved by back-substitution. Another use of the QR decomposition is to compute an orthonormal basis for a set of vectors. The first N columns of Q form an orthonormal basis for the range of A, ran(A), when A has full column rank. - Function: int gsl_linalg_QR_decomp (gsl_matrix * A, gsl_vector * TAU) This function factorizes the M-by-N matrix A into the QR decomposition A = Q R. On output the diagonal and upper triangular part of the input matrix contain the matrix R. The vector TAU and the columns of the lower triangular part of the matrix A contain the Householder coefficients and Householder vectors which encode the orthogonal matrix Q. The vector TAU must be of length k=\min(M,N). The matrix Q is related to these components by, Q = Q_k ... Q_2 Q_1 where Q_i = I - \tau_i v_i v_i^T and v_i is the Householder vector v_i = (0,...,1,A(i+1,i),A(i+2,i),...,A(m,i)). This is the same storage scheme as used by LAPACK. The algorithm used to perform the decomposition is Householder QR (Golub & Van Loan, `Matrix Computations', Algorithm 5.2.1). - Function: int gsl_linalg_QR_solve (const gsl_matrix * QR, const gsl_vector * TAU, const gsl_vector * B, gsl_vector * X) This function solves the square system A x = b using the QR decomposition of A into (QR, TAU) given by `gsl_linalg_QR_decomp'. The least-squares solution for rectangular systems can be found using `gsl_linalg_QR_lssolve'. - Function: int gsl_linalg_QR_svx (const gsl_matrix * QR, const gsl_vector * TAU, gsl_vector * X) This function solves the square system A x = b in-place using the QR decomposition of A into (QR,TAU) given by `gsl_linalg_QR_decomp'. On input X should contain the right-hand side b, which is replaced by the solution on output. - Function: int gsl_linalg_QR_lssolve (const gsl_matrix * QR, const gsl_vector * TAU, const gsl_vector * B, gsl_vector * X, gsl_vector * RESIDUAL) This function finds the least squares solution to the overdetermined system A x = b where the matrix A has more rows than columns. 
The least squares solution minimizes the Euclidean norm of the residual, ||Ax - b||.The routine uses the QR decomposition of A into (QR, TAU) given by `gsl_linalg_QR_decomp'. The solution is returned in X. The residual is computed as a by-product and stored in RESIDUAL. - Function: int gsl_linalg_QR_QTvec (const gsl_matrix * QR, const gsl_vector * TAU, gsl_vector * V) This function applies the matrix Q^T encoded in the decomposition (QR,TAU) to the vector V, storing the result Q^T v in V. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q^T. - Function: int gsl_linalg_QR_Qvec (const gsl_matrix * QR, const gsl_vector * TAU, gsl_vector * V) This function applies the matrix Q encoded in the decomposition (QR,TAU) to the vector V, storing the result Q v in V. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q. - Function: int gsl_linalg_QR_Rsolve (const gsl_matrix * QR, const gsl_vector * B, gsl_vector * X) This function solves the triangular system R x = b for X. It may be useful if the product b' = Q^T b has already been computed using `gsl_linalg_QR_QTvec'. - Function: int gsl_linalg_QR_Rsvx (const gsl_matrix * QR, gsl_vector * X) This function solves the triangular system R x = b for X in-place. On input X should contain the right-hand side b and is replaced by the solution on output. This function may be useful if the product b' = Q^T b has already been computed using `gsl_linalg_QR_QTvec'. - Function: int gsl_linalg_QR_unpack (const gsl_matrix * QR, const gsl_vector * TAU, gsl_matrix * Q, gsl_matrix * R) This function unpacks the encoded QR decomposition (QR,TAU) into the matrices Q and R, where Q is M-by-M and R is M-by-N. - Function: int gsl_linalg_QR_QRsolve (gsl_matrix * Q, gsl_matrix * R, const gsl_vector * B, gsl_vector * X) This function solves the system R x = Q^T b for X. It can be used when the QR decomposition of a matrix is available in unpacked form as (Q, R). - Function: int gsl_linalg_QR_update (gsl_matrix * Q, gsl_matrix * R, gsl_vector * W, const gsl_vector * V) This function performs a rank-1 update w v^T of the QR decomposition (Q, R). The update is given by Q'R' = Q R + w v^T where the output matrices Q' and R' are also orthogonal and right triangular. Note that W is destroyed by the update. - Function: int gsl_linalg_R_solve (const gsl_matrix * R, const gsl_vector * B, gsl_vector * X) This function solves the triangular system R x = b for the N-by-N matrix R. - Function: int gsl_linalg_R_svx (const gsl_matrix * R, gsl_vector * X) This function solves the triangular system R x = b in-place. On input X should contain the right-hand side b, which is replaced by the solution on output.  File: gsl-ref.info, Node: QR Decomposition with Column Pivoting, Next: Singular Value Decomposition, Prev: QR Decomposition, Up: Linear Algebra QR Decomposition with Column Pivoting ===================================== The QR decomposition can be extended to the rank deficient case by introducing a column permutation P, A P = Q R The first r columns of Q form an orthonormal basis for the range of A for a matrix with column rank r. This decomposition can also be used to convert the linear system A x = b into the triangular system R y = Q^T b, x = P y, which can be solved by back-substitution and permutation. We denote the QR decomposition with column pivoting by QRP^T since A = Q R P^T. 
- Function: int gsl_linalg_QRPT_decomp (gsl_matrix * A, gsl_vector * TAU, gsl_permutation * P, int * SIGNUM, gsl_vector * NORM) This function factorizes the M-by-N matrix A into the QRP^T decomposition A = Q R P^T. On output the diagonal and upper triangular part of the input matrix contain the matrix R. The permutation matrix P is stored in the permutation P. The sign of the permutation is given by SIGNUM. It has the value (-1)^n, where n is the number of interchanges in the permutation. The vector TAU and the columns of the lower triangular part of the matrix A contain the Householder coefficients and vectors which encode the orthogonal matrix Q. The vector TAU must be of length k=\min(M,N). The matrix Q is related to these components by, Q = Q_k ... Q_2 Q_1 where Q_i = I - \tau_i v_i v_i^T and v_i is the Householder vector v_i = (0,...,1,A(i+1,i),A(i+2,i),...,A(m,i)). This is the same storage scheme as used by LAPACK. The vector NORM is a workspace of length N used for column pivoting. The algorithm used to perform the decomposition is Householder QR with column pivoting (Golub & Van Loan, `Matrix Computations', Algorithm 5.4.1). - Function: int gsl_linalg_QRPT_decomp2 (const gsl_matrix * A, gsl_matrix * Q, gsl_matrix * R, gsl_vector * TAU, gsl_permutation * P, int * SIGNUM, gsl_vector * NORM) This function factorizes the matrix A into the decomposition A = Q R P^T without modifying A itself and storing the output in the separate matrices Q and R. - Function: int gsl_linalg_QRPT_solve (const gsl_matrix * QR, const gsl_vector * TAU, const gsl_permutation * P, const gsl_vector * B, gsl_vector * X) This function solves the square system A x = b using the QRP^T decomposition of A into (QR, TAU, P) given by `gsl_linalg_QRPT_decomp'. - Function: int gsl_linalg_QRPT_svx (const gsl_matrix * QR, const gsl_vector * TAU, const gsl_permutation * P, gsl_vector * X) This function solves the square system A x = b in-place using the QRP^T decomposition of A into (QR,TAU,P). On input X should contain the right-hand side b, which is replaced by the solution on output. - Function: int gsl_linalg_QRPT_QRsolve (const gsl_matrix * Q, const gsl_matrix * R, const gsl_permutation * P, const gsl_vector * B, gsl_vector * X) This function solves the square system R P^T x = Q^T b for X. It can be used when the QR decomposition of a matrix is available in unpacked form as (Q, R). - Function: int gsl_linalg_QRPT_update (gsl_matrix * Q, gsl_matrix * R, const gsl_permutation * P, gsl_vector * U, const gsl_vector * V) This function performs a rank-1 update w v^T of the QRP^T decomposition (Q, R, P). The update is given by Q'R' = Q R + w v^T where the output matrices Q' and R' are also orthogonal and right triangular. Note that W is destroyed by the update. The permutation P is not changed. - Function: int gsl_linalg_QRPT_Rsolve (const gsl_matrix * QR, const gsl_permutation * P, const gsl_vector * B, gsl_vector * X) This function solves the triangular system R P^T x = b for the N-by-N matrix R contained in QR. - Function: int gsl_linalg_QRPT_Rsvx (const gsl_matrix * QR, const gsl_permutation * P, gsl_vector * X) This function solves the triangular system R P^T x = b in-place for the N-by-N matrix R contained in QR. On input X should contain the right-hand side b, which is replaced by the solution on output.  
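   As a minimal sketch of the least-squares use of the QR routines described earlier, the following program fits a straight line to three points with `gsl_linalg_QR_decomp' and `gsl_linalg_QR_lssolve'; the data values are arbitrary.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* overdetermined 3-by-2 system: fit y = c0 + c1 t */
       double a_data[] = { 1.0, 0.0,
                           1.0, 1.0,
                           1.0, 2.0 };
       double b_data[] = { 1.0, 2.0, 2.9 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 3, 2);
       gsl_vector_view b = gsl_vector_view_array (b_data, 3);

       gsl_vector *tau = gsl_vector_alloc (2);     /* length min(M,N) */
       gsl_vector *x = gsl_vector_alloc (2);
       gsl_vector *res = gsl_vector_alloc (3);

       gsl_linalg_QR_decomp (&A.matrix, tau);
       gsl_linalg_QR_lssolve (&A.matrix, tau, &b.vector, x, res);

       printf ("c0 = %g, c1 = %g\n",
               gsl_vector_get (x, 0), gsl_vector_get (x, 1));

       gsl_vector_free (tau);
       gsl_vector_free (x);
       gsl_vector_free (res);
       return 0;
     }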
File: gsl-ref.info, Node: Singular Value Decomposition, Next: Cholesky Decomposition, Prev: QR Decomposition with Column Pivoting, Up: Linear Algebra Singular Value Decomposition ============================ A general rectangular M-by-N matrix A has a singular value decomposition (SVD) into the product of an M-by-N orthogonal matrix U, an N-by-N diagonal matrix of singular values S and the transpose of an N-by-N orthogonal square matrix V, A = U S V^T The singular values \sigma_i = S_{ii} are all non-negative and are generally chosen to form a non-increasing sequence \sigma_1 >= \sigma_2 >= ... >= \sigma_N >= 0. The singular value decomposition of a matrix has many practical uses. The condition number of the matrix is given by the ratio of the largest singular value to the smallest singular value. The presence of a zero singular value indicates that the matrix is singular. The number of non-zero singular values indicates the rank of the matrix. In practice singular value decomposition of a rank-deficient matrix will not produce exact zeroes for singular values, due to finite numerical precision. Small singular values should be edited by choosing a suitable tolerance. - Function: int gsl_linalg_SV_decomp (gsl_matrix * A, gsl_matrix * V, gsl_vector * S, gsl_vector * WORK) This function factorizes the M-by-N matrix A into the singular value decomposition A = U S V^T for M >= N. On output the matrix A is replaced by U. The diagonal elements of the singular value matrix S are stored in the vector S. The singular values are non-negative and form a non-increasing sequence from S_1 to S_N. The matrix V contains the elements of V in untransposed form. To form the product U S V^T it is necessary to take the transpose of V. A workspace of length N is required in WORK. This routine uses the Golub-Reinsch SVD algorithm. - Function: int gsl_linalg_SV_decomp_mod (gsl_matrix * A, gsl_matrix * X, gsl_matrix * V, gsl_vector * S, gsl_vector * WORK) This function computes the SVD using the modified Golub-Reinsch algorithm, which is faster for M>>N. It requires the vector WORK of length N and the N-by-N matrix X as additional working space. - Function: int gsl_linalg_SV_decomp_jacobi (gsl_matrix * A, gsl_matrix * V, gsl_vector * S) This function computes the SVD of the M-by-N matrix A using one-sided Jacobi orthogonalization for M >= N. The Jacobi method can compute singular values to higher relative accuracy than Golub-Reinsch algorithms (see references for details). - Function: int gsl_linalg_SV_solve (gsl_matrix * U, gsl_matrix * V, gsl_vector * S, const gsl_vector * B, gsl_vector * X) This function solves the system A x = b using the singular value decomposition (U, S, V) of A given by `gsl_linalg_SV_decomp'. Only non-zero singular values are used in computing the solution. The parts of the solution corresponding to singular values of zero are ignored. Other singular values can be edited out by setting them to zero before calling this function. In the over-determined case where A has more rows than columns the system is solved in the least squares sense, returning the solution X which minimizes ||A x - b||_2.  
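   The following minimal sketch solves a small square system with `gsl_linalg_SV_decomp' and `gsl_linalg_SV_solve'; the matrix values are arbitrary and chosen only for demonstration.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       double a_data[] = { 3.0, 1.0,
                           1.0, 2.0 };
       double b_data[] = { 9.0, 8.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 2);
       gsl_vector_view b = gsl_vector_view_array (b_data, 2);

       gsl_matrix *V = gsl_matrix_alloc (2, 2);
       gsl_vector *S = gsl_vector_alloc (2);
       gsl_vector *work = gsl_vector_alloc (2);
       gsl_vector *x = gsl_vector_alloc (2);

       /* A is replaced by U on output */
       gsl_linalg_SV_decomp (&A.matrix, V, S, work);
       gsl_linalg_SV_solve (&A.matrix, V, S, &b.vector, x);

       printf ("x = (%g, %g)\n",
               gsl_vector_get (x, 0), gsl_vector_get (x, 1));

       gsl_matrix_free (V);
       gsl_vector_free (S);
       gsl_vector_free (work);
       gsl_vector_free (x);
       return 0;
     }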
File: gsl-ref.info, Node: Cholesky Decomposition, Next: Tridiagonal Decomposition of Real Symmetric Matrices, Prev: Singular Value Decomposition, Up: Linear Algebra Cholesky Decomposition ====================== A symmetric, positive definite square matrix A has a Cholesky decomposition into a product of a lower triangular matrix L and its transpose L^T, A = L L^T This is sometimes referred to as taking the square-root of a matrix. The Cholesky decomposition can only be carried out when all the eigenvalues of the matrix are positive. This decomposition can be used to convert the linear system A x = b into a pair of triangular systems (L y = b, L^T x = y), which can be solved by forward and back-substitution. - Function: int gsl_linalg_cholesky_decomp (gsl_matrix * A) This function factorizes the positive-definite symmetric square matrix A into the Cholesky decomposition A = L L^T. On output the diagonal and lower triangular part of the input matrix A contain the matrix L. The upper triangular part of the input matrix contains L^T, the diagonal terms being identical for both L and L^T. If the matrix is not positive-definite then the decomposition will fail, returning the error code `GSL_EDOM'. - Function: int gsl_linalg_cholesky_solve (const gsl_matrix * CHOLESKY, const gsl_vector * B, gsl_vector * X) This function solves the system A x = b using the Cholesky decomposition of A into the matrix CHOLESKY given by `gsl_linalg_cholesky_decomp'. - Function: int gsl_linalg_cholesky_svx (const gsl_matrix * CHOLESKY, gsl_vector * X) This function solves the system A x = b in-place using the Cholesky decomposition of A into the matrix CHOLESKY given by `gsl_linalg_cholesky_decomp'. On input X should contain the right-hand side b, which is replaced by the solution on output.  File: gsl-ref.info, Node: Tridiagonal Decomposition of Real Symmetric Matrices, Next: Tridiagonal Decomposition of Hermitian Matrices, Prev: Cholesky Decomposition, Up: Linear Algebra Tridiagonal Decomposition of Real Symmetric Matrices ==================================================== A symmetric matrix A can be factorized by similarity transformations into the form, A = Q T Q^T where Q is an orthogonal matrix and T is a symmetric tridiagonal matrix. - Function: int gsl_linalg_symmtd_decomp (gsl_matrix * A, gsl_vector * TAU) This function factorizes the symmetric square matrix A into the symmetric tridiagonal decomposition Q T Q^T. On output the diagonal and subdiagonal part of the input matrix A contain the tridiagonal matrix T. The remaining lower triangular part of the input matrix contains the Householder vectors which, together with the Householder coefficients TAU, encode the orthogonal matrix Q. This storage scheme is the same as used by LAPACK. The upper triangular part of A is not referenced. - Function: int gsl_linalg_symmtd_unpack (const gsl_matrix * A, const gsl_vector * TAU, gsl_matrix * Q, gsl_vector * DIAG, gsl_vector * SUBDIAG) This function unpacks the encoded symmetric tridiagonal decomposition (A, TAU) obtained from `gsl_linalg_symmtd_decomp' into the orthogonal matrix Q, the vector of diagonal elements DIAG and the vector of subdiagonal elements SUBDIAG. - Function: int gsl_linalg_symmtd_unpack_T (const gsl_matrix * A, gsl_vector * DIAG, gsl_vector * SUBDIAG) This function unpacks the diagonal and subdiagonal of the encoded symmetric tridiagonal decomposition (A, TAU) obtained from `gsl_linalg_symmtd_decomp' into the vectors DIAG and SUBDIAG.  
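   As a minimal sketch of the tridiagonal decomposition routines above, the following fragment reduces a small symmetric matrix and extracts the diagonal and subdiagonal of T; it assumes the vector TAU has length N-1, matching the N-1 Householder reflections, and the matrix values are arbitrary.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       double a_data[] = { 4.0, 1.0, 2.0,
                           1.0, 3.0, 0.0,
                           2.0, 0.0, 5.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 3, 3);

       gsl_vector *tau = gsl_vector_alloc (2);        /* assumed length N-1 */
       gsl_vector *diag = gsl_vector_alloc (3);
       gsl_vector *subdiag = gsl_vector_alloc (2);

       /* reduce A to Q T Q^T and read off the tridiagonal T */
       gsl_linalg_symmtd_decomp (&A.matrix, tau);
       gsl_linalg_symmtd_unpack_T (&A.matrix, diag, subdiag);

       printf ("T diagonal: %g %g %g\n",
               gsl_vector_get (diag, 0),
               gsl_vector_get (diag, 1),
               gsl_vector_get (diag, 2));

       gsl_vector_free (tau);
       gsl_vector_free (diag);
       gsl_vector_free (subdiag);
       return 0;
     }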
File: gsl-ref.info,  Node: Tridiagonal Decomposition of Hermitian Matrices,  Next: Bidiagonalization,  Prev: Tridiagonal Decomposition of Real Symmetric Matrices,  Up: Linear Algebra

Tridiagonal Decomposition of Hermitian Matrices
===============================================

   A hermitian matrix A can be factorized by similarity transformations into the form,

     A = U T U^H

where U is a unitary matrix and T is a real symmetric tridiagonal matrix.

 - Function: int gsl_linalg_hermtd_decomp (gsl_matrix_complex * A, gsl_vector_complex * TAU)
     This function factorizes the hermitian matrix A into the symmetric tridiagonal decomposition U T U^H.  On output the real parts of the diagonal and subdiagonal part of the input matrix A contain the tridiagonal matrix T.  The remaining lower triangular part of the input matrix contains the Householder vectors which, together with the Householder coefficients TAU, encode the unitary matrix U.  This storage scheme is the same as used by LAPACK.  The upper triangular part of A and imaginary parts of the diagonal are not referenced.

 - Function: int gsl_linalg_hermtd_unpack (const gsl_matrix_complex * A, const gsl_vector_complex * TAU, gsl_matrix_complex * Q, gsl_vector * DIAG, gsl_vector * SUBDIAG)
     This function unpacks the encoded tridiagonal decomposition (A, TAU) obtained from `gsl_linalg_hermtd_decomp' into the unitary matrix U, the real vector of diagonal elements DIAG and the real vector of subdiagonal elements SUBDIAG.

 - Function: int gsl_linalg_hermtd_unpack_T (const gsl_matrix_complex * A, gsl_vector * DIAG, gsl_vector * SUBDIAG)
     This function unpacks the diagonal and subdiagonal of the encoded tridiagonal decomposition (A, TAU) obtained from `gsl_linalg_hermtd_decomp' into the real vectors DIAG and SUBDIAG.


File: gsl-ref.info,  Node: Bidiagonalization,  Next: Householder Transformations,  Prev: Tridiagonal Decomposition of Hermitian Matrices,  Up: Linear Algebra

Bidiagonalization
=================

   A general matrix A can be factorized by similarity transformations into the form,

     A = U B V^T

where U and V are orthogonal matrices and B is an N-by-N bidiagonal matrix with non-zero entries only on the diagonal and superdiagonal.  The size of U is M-by-N and the size of V is N-by-N.

 - Function: int gsl_linalg_bidiag_decomp (gsl_matrix * A, gsl_vector * TAU_U, gsl_vector * TAU_V)
     This function factorizes the M-by-N matrix A into bidiagonal form U B V^T.  The diagonal and superdiagonal of the matrix B are stored in the diagonal and superdiagonal of A.  The orthogonal matrices U and V are stored as compressed Householder vectors in the remaining elements of A.  The Householder coefficients are stored in the vectors TAU_U and TAU_V.  The length of TAU_U must equal the number of elements in the diagonal of A and the length of TAU_V should be one element shorter.

 - Function: int gsl_linalg_bidiag_unpack (const gsl_matrix * A, const gsl_vector * TAU_U, gsl_matrix * U, const gsl_vector * TAU_V, gsl_matrix * V, gsl_vector * DIAG, gsl_vector * SUPERDIAG)
     This function unpacks the bidiagonal decomposition of A given by `gsl_linalg_bidiag_decomp', (A, TAU_U, TAU_V) into the separate orthogonal matrices U, V and the diagonal vector DIAG and superdiagonal SUPERDIAG.  Note that U is stored as a compact M-by-N orthogonal matrix satisfying U^T U = I for efficiency.
- Function: int gsl_linalg_bidiag_unpack2 (gsl_matrix * A, gsl_vector * TAU_U, gsl_vector * TAU_V, gsl_matrix * V) This function unpacks the bidiagonal decomposition of A given by `gsl_linalg_bidiag_decomp', (A, TAU_U, TAU_V) into the separate orthogonal matrices U, V and the diagonal vector DIAG and superdiagonal SUPERDIAG. The matrix U is stored in-place in A. - Function: int gsl_linalg_bidiag_unpack_B (const gsl_matrix * A, gsl_vector * DIAG, gsl_vector * SUPERDIAG) This function unpacks the diagonal and superdiagonal of the bidiagonal decomposition of A given by `gsl_linalg_bidiag_decomp', into the diagonal vector DIAG and superdiagonal vector SUPERDIAG.  File: gsl-ref.info, Node: Householder Transformations, Next: Householder solver for linear systems, Prev: Bidiagonalization, Up: Linear Algebra Householder Transformations =========================== A Householder transformation is a rank-1 modification of the identity matrix which can be used to zero out selected elements of a vector. A Householder matrix P takes the form, P = I - \tau v v^T where v is a vector (called the "Householder vector") and \tau = 2/(v^T v). The functions described in this section use the rank-1 structure of the Householder matrix to create and apply Householder transformations efficiently. - Function: double gsl_linalg_householder_transform (gsl_vector * V) This function prepares a Householder transformation P = I - \tau v v^T which can be used to zero all the elements of the input vector except the first. On output the transformation is stored in the vector V and the scalar \tau is returned. - Function: int gsl_linalg_householder_hm (double tau, const gsl_vector * v, gsl_matrix * A) This function applies the Householder matrix P defined by the scalar TAU and the vector V to the left-hand side of the matrix A. On output the result P A is stored in A. - Function: int gsl_linalg_householder_mh (double tau, const gsl_vector * v, gsl_matrix * A) This function applies the Householder matrix P defined by the scalar TAU and the vector V to the right-hand side of the matrix A. On output the result A P is stored in A. - Function: int gsl_linalg_householder_hv (double tau, const gsl_vector * v, gsl_vector * w) This function applies the Householder transformation P defined by the scalar TAU and the vector V to the vector W. On output the result P w is stored in W.  File: gsl-ref.info, Node: Householder solver for linear systems, Next: Tridiagonal Systems, Prev: Householder Transformations, Up: Linear Algebra Householder solver for linear systems ===================================== - Function: int gsl_linalg_HH_solve (gsl_matrix * A, const gsl_vector * B, gsl_vector * X) This function solves the system A x = b directly using Householder transformations. On output the solution is stored in X and B is not modified. The matrix A is destroyed by the Householder transformations. - Function: int gsl_linalg_HH_svx (gsl_matrix * A, gsl_vector * X) This function solves the system A x = b in-place using Householder transformations. On input X should contain the right-hand side b, which is replaced by the solution on output. The matrix A is destroyed by the Householder transformations.  
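   The following minimal sketch solves a small system with `gsl_linalg_HH_solve'; the values are arbitrary, and the matrix A is overwritten by the routine as noted above.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       double a_data[] = { 2.0, 1.0,
                           1.0, 3.0 };
       double b_data[] = { 3.0, 5.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 2);
       gsl_vector_view b = gsl_vector_view_array (b_data, 2);
       gsl_vector *x = gsl_vector_alloc (2);

       /* A is destroyed by the Householder transformations */
       gsl_linalg_HH_solve (&A.matrix, &b.vector, x);

       printf ("x = (%g, %g)\n",
               gsl_vector_get (x, 0), gsl_vector_get (x, 1));

       gsl_vector_free (x);
       return 0;
     }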
File: gsl-ref.info,  Node: Tridiagonal Systems,  Next: Linear Algebra Examples,  Prev: Householder solver for linear systems,  Up: Linear Algebra

Tridiagonal Systems
===================

 - Function: int gsl_linalg_solve_tridiag (const gsl_vector * DIAG, const gsl_vector * E, const gsl_vector * F, const gsl_vector * B, gsl_vector * X)
     This function solves the general N-by-N system A x = b where A is tridiagonal (N >= 2).  The super-diagonal and sub-diagonal vectors E and F must be one element shorter than the diagonal vector DIAG.  The form of A for the 4-by-4 case is shown below,

          A = ( d_0 e_0  0   0  )
              ( f_0 d_1 e_1  0  )
              (  0  f_1 d_2 e_2 )
              (  0   0  f_2 d_3 )

 - Function: int gsl_linalg_solve_symm_tridiag (const gsl_vector * DIAG, const gsl_vector * E, const gsl_vector * B, gsl_vector * X)
     This function solves the general N-by-N system A x = b where A is symmetric tridiagonal (N >= 2).  The off-diagonal vector E must be one element shorter than the diagonal vector DIAG.  The form of A for the 4-by-4 case is shown below,

          A = ( d_0 e_0  0   0  )
              ( e_0 d_1 e_1  0  )
              (  0  e_1 d_2 e_2 )
              (  0   0  e_2 d_3 )

 - Function: int gsl_linalg_solve_cyc_tridiag (const gsl_vector * DIAG, const gsl_vector * E, const gsl_vector * F, const gsl_vector * B, gsl_vector * X)
     This function solves the general N-by-N system A x = b where A is cyclic tridiagonal (N >= 3).  The cyclic super-diagonal and sub-diagonal vectors E and F must have the same number of elements as the diagonal vector DIAG.  The form of A for the 4-by-4 case is shown below,

          A = ( d_0 e_0  0  f_3 )
              ( f_0 d_1 e_1  0  )
              (  0  f_1 d_2 e_2 )
              ( e_3  0  f_2 d_3 )

 - Function: int gsl_linalg_solve_symm_cyc_tridiag (const gsl_vector * DIAG, const gsl_vector * E, const gsl_vector * B, gsl_vector * X)
     This function solves the general N-by-N system A x = b where A is symmetric cyclic tridiagonal (N >= 3).  The cyclic off-diagonal vector E must have the same number of elements as the diagonal vector DIAG.  The form of A for the 4-by-4 case is shown below,

          A = ( d_0 e_0  0  e_3 )
              ( e_0 d_1 e_1  0  )
              (  0  e_1 d_2 e_2 )
              ( e_3  0  e_2 d_3 )


File: gsl-ref.info,  Node: Linear Algebra Examples,  Next: Linear Algebra References and Further Reading,  Prev: Tridiagonal Systems,  Up: Linear Algebra

Examples
========

   The following program solves the linear system A x = b.  The system to be solved is,

          [ 0.18 0.60 0.57 0.96 ] [x0]   [1.0]
          [ 0.41 0.24 0.99 0.58 ] [x1] = [2.0]
          [ 0.14 0.30 0.97 0.66 ] [x2]   [3.0]
          [ 0.51 0.13 0.19 0.85 ] [x3]   [4.0]

and the solution is found using LU decomposition of the matrix A.
     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       double a_data[] = { 0.18, 0.60, 0.57, 0.96,
                           0.41, 0.24, 0.99, 0.58,
                           0.14, 0.30, 0.97, 0.66,
                           0.51, 0.13, 0.19, 0.85 };

       double b_data[] = { 1.0, 2.0, 3.0, 4.0 };

       gsl_matrix_view m
         = gsl_matrix_view_array (a_data, 4, 4);

       gsl_vector_view b
         = gsl_vector_view_array (b_data, 4);

       gsl_vector *x = gsl_vector_alloc (4);

       int s;
       gsl_permutation * p = gsl_permutation_alloc (4);

       gsl_linalg_LU_decomp (&m.matrix, p, &s);
       gsl_linalg_LU_solve (&m.matrix, p, &b.vector, x);

       printf ("x = \n");
       gsl_vector_fprintf (stdout, x, "%g");

       gsl_permutation_free (p);
       return 0;
     }

Here is the output from the program,

     x =
     -4.05205
     -12.6056
     1.66091
     8.69377

This can be verified by multiplying the solution x by the original
matrix A using GNU OCTAVE,

     octave> A = [ 0.18, 0.60, 0.57, 0.96;
                   0.41, 0.24, 0.99, 0.58;
                   0.14, 0.30, 0.97, 0.66;
                   0.51, 0.13, 0.19, 0.85 ];

     octave> x = [ -4.05205; -12.6056; 1.66091; 8.69377];
     octave> A * x
     ans =
       1.0000
       2.0000
       3.0000
       4.0000

This reproduces the original right-hand side vector, b, in accordance
with the equation A x = b.

File: gsl-ref.info, Node: Linear Algebra References and Further Reading, Prev: Linear Algebra Examples, Up: Linear Algebra

References and Further Reading
==============================

Further information on the algorithms described in this section can be
found in the following book,

     G. H. Golub, C. F. Van Loan, `Matrix Computations' (3rd Ed, 1996),
     Johns Hopkins University Press, ISBN 0-8018-5414-8.

The LAPACK library is described in the following manual,

     `LAPACK Users' Guide' (Third Edition, 1999), Published by SIAM,
     ISBN 0-89871-447-8.

     `http://www.netlib.org/lapack'

The LAPACK source code can be found at the website above, along with an
online copy of the users guide.

The Modified Golub-Reinsch algorithm is described in the following
paper,

     T.F. Chan, "An Improved Algorithm for Computing the Singular Value
     Decomposition", `ACM Transactions on Mathematical Software', 8
     (1982), pp 72-83.

The Jacobi algorithm for singular value decomposition is described in
the following papers,

     J.C. Nash, "A one-sided transformation method for the singular
     value decomposition and algebraic eigenproblem", `Computer
     Journal', Volume 18, Number 1 (1973), p 74-76

     James Demmel, Kresimir Veselic, "Jacobi's Method is more accurate
     than QR", `Lapack Working Note 15' (LAWN-15), October 1989.
     Available from netlib, `http://www.netlib.org/lapack/' in the
     `lawns' or `lawnspdf' directories.

File: gsl-ref.info, Node: Eigensystems, Next: Fast Fourier Transforms, Prev: Linear Algebra, Up: Top

Eigensystems
************

This chapter describes functions for computing eigenvalues and
eigenvectors of matrices. There are routines for real symmetric and
complex hermitian matrices, and eigenvalues can be computed with or
without eigenvectors. The algorithms used are symmetric
bidiagonalization followed by QR reduction.

These routines are intended for "small" systems where simple algorithms
are acceptable. Anyone interested in finding eigenvalues and
eigenvectors of large matrices will want to use the sophisticated
routines found in LAPACK. The Fortran version of LAPACK is recommended
as the standard package for large-scale linear algebra.

The functions described in this chapter are declared in the header file
`gsl_eigen.h'.
* Menu: * Real Symmetric Matrices:: * Complex Hermitian Matrices:: * Sorting Eigenvalues and Eigenvectors:: * Eigenvalue and Eigenvector Examples:: * Eigenvalue and Eigenvector References::  File: gsl-ref.info, Node: Real Symmetric Matrices, Next: Complex Hermitian Matrices, Up: Eigensystems Real Symmetric Matrices ======================= - Function: gsl_eigen_symm_workspace * gsl_eigen_symm_alloc (const size_t N) This function allocates a workspace for computing eigenvalues of N-by-N real symmetric matrices. The size of the workspace is O(2n). - Function: void gsl_eigen_symm_free (gsl_eigen_symm_workspace * W) This function frees the memory associated with the workspace W. - Function: int gsl_eigen_symm (gsl_matrix * A, gsl_vector * EVAL, gsl_eigen_symm_workspace * W) This function computes the eigenvalues of the real symmetric matrix A. Additional workspace of the appropriate size must be provided in W. The diagonal and lower triangular part of A are destroyed during the computation, but the strict upper triangular part is not referenced. The eigenvalues are stored in the vector EVAL and are unordered. - Function: gsl_eigen_symmv_workspace * gsl_eigen_symmv_alloc (const size_t N) This function allocates a workspace for computing eigenvalues and eigenvectors of N-by-N real symmetric matrices. The size of the workspace is O(4n). - Function: void gsl_eigen_symmv_free (gsl_eigen_symmv_workspace * W) This function frees the memory associated with the workspace W. - Function: int gsl_eigen_symmv (gsl_matrix * A, gsl_vector * EVAL, gsl_matrix * EVEC, gsl_eigen_symmv_workspace * W) This function computes the eigenvalues and eigenvectors of the real symmetric matrix A. Additional workspace of the appropriate size must be provided in W. The diagonal and lower triangular part of A are destroyed during the computation, but the strict upper triangular part is not referenced. The eigenvalues are stored in the vector EVAL and are unordered. The corresponding eigenvectors are stored in the columns of the matrix EVEC. For example, the eigenvector in the first column corresponds to the first eigenvalue. The eigenvectors are guaranteed to be mutually orthogonal and normalised to unit magnitude.  File: gsl-ref.info, Node: Complex Hermitian Matrices, Next: Sorting Eigenvalues and Eigenvectors, Prev: Real Symmetric Matrices, Up: Eigensystems Complex Hermitian Matrices ========================== - Function: gsl_eigen_herm_workspace * gsl_eigen_herm_alloc (const size_t N) This function allocates a workspace for computing eigenvalues of N-by-N complex hermitian matrices. The size of the workspace is O(3n). - Function: void gsl_eigen_herm_free (gsl_eigen_herm_workspace * W) This function frees the memory associated with the workspace W. - Function: int gsl_eigen_herm (gsl_matrix_complex * A, gsl_vector * EVAL, gsl_eigen_herm_workspace * W) This function computes the eigenvalues of the complex hermitian matrix A. Additional workspace of the appropriate size must be provided in W. The diagonal and lower triangular part of A are destroyed during the computation, but the strict upper triangular part is not referenced. The imaginary parts of the diagonal are assumed to be zero and are not referenced. The eigenvalues are stored in the vector EVAL and are unordered. - Function: gsl_eigen_hermv_workspace * gsl_eigen_hermv_alloc (const size_t N) This function allocates a workspace for computing eigenvalues and eigenvectors of N-by-N complex hermitian matrices. The size of the workspace is O(5n). 
 - Function: void gsl_eigen_hermv_free (gsl_eigen_hermv_workspace * W)
     This function frees the memory associated with the workspace W.

 - Function: int gsl_eigen_hermv (gsl_matrix_complex * A, gsl_vector * EVAL, gsl_matrix_complex * EVEC, gsl_eigen_hermv_workspace * W)
     This function computes the eigenvalues and eigenvectors of the
     complex hermitian matrix A. Additional workspace of the
     appropriate size must be provided in W. The diagonal and lower
     triangular part of A are destroyed during the computation, but the
     strict upper triangular part is not referenced. The imaginary
     parts of the diagonal are assumed to be zero and are not
     referenced. The eigenvalues are stored in the vector EVAL and are
     unordered. The corresponding complex eigenvectors are stored in
     the columns of the matrix EVEC. For example, the eigenvector in
     the first column corresponds to the first eigenvalue. The
     eigenvectors are guaranteed to be mutually orthogonal and
     normalised to unit magnitude.

File: gsl-ref.info, Node: Sorting Eigenvalues and Eigenvectors, Next: Eigenvalue and Eigenvector Examples, Prev: Complex Hermitian Matrices, Up: Eigensystems

Sorting Eigenvalues and Eigenvectors
====================================

 - Function: int gsl_eigen_symmv_sort (gsl_vector * EVAL, gsl_matrix * EVEC, gsl_eigen_sort_t SORT_TYPE)
     This function simultaneously sorts the eigenvalues stored in the
     vector EVAL and the corresponding real eigenvectors stored in the
     columns of the matrix EVEC into ascending or descending order
     according to the value of the parameter SORT_TYPE,

    `GSL_EIGEN_SORT_VAL_ASC'
          ascending order in numerical value

    `GSL_EIGEN_SORT_VAL_DESC'
          descending order in numerical value

    `GSL_EIGEN_SORT_ABS_ASC'
          ascending order in magnitude

    `GSL_EIGEN_SORT_ABS_DESC'
          descending order in magnitude

 - Function: int gsl_eigen_hermv_sort (gsl_vector * EVAL, gsl_matrix_complex * EVEC, gsl_eigen_sort_t SORT_TYPE)
     This function simultaneously sorts the eigenvalues stored in the
     vector EVAL and the corresponding complex eigenvectors stored in
     the columns of the matrix EVEC into ascending or descending order
     according to the value of the parameter SORT_TYPE as shown above.

File: gsl-ref.info, Node: Eigenvalue and Eigenvector Examples, Next: Eigenvalue and Eigenvector References, Prev: Sorting Eigenvalues and Eigenvectors, Up: Eigensystems

Examples
========

The following program computes the eigenvalues and eigenvectors of the
4-th order Hilbert matrix, H(i,j) = 1/(i + j + 1).

     #include <stdio.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_eigen.h>

     int
     main (void)
     {
       double data[] = { 1.0  , 1/2.0, 1/3.0, 1/4.0,
                         1/2.0, 1/3.0, 1/4.0, 1/5.0,
                         1/3.0, 1/4.0, 1/5.0, 1/6.0,
                         1/4.0, 1/5.0, 1/6.0, 1/7.0 };

       gsl_matrix_view m
         = gsl_matrix_view_array (data, 4, 4);

       gsl_vector *eval = gsl_vector_alloc (4);
       gsl_matrix *evec = gsl_matrix_alloc (4, 4);

       gsl_eigen_symmv_workspace * w =
         gsl_eigen_symmv_alloc (4);

       gsl_eigen_symmv (&m.matrix, eval, evec, w);

       gsl_eigen_symmv_free (w);

       gsl_eigen_symmv_sort (eval, evec,
                             GSL_EIGEN_SORT_ABS_ASC);

       {
         int i;

         for (i = 0; i < 4; i++)
           {
             double eval_i
                = gsl_vector_get (eval, i);
             gsl_vector_view evec_i
                = gsl_matrix_column (evec, i);

             printf ("eigenvalue = %g\n", eval_i);
             printf ("eigenvector = \n");
             gsl_vector_fprintf (stdout,
                                 &evec_i.vector, "%g");
           }
       }

       return 0;
     }

Here is the beginning of the output from the program,

     $ ./a.out
     eigenvalue = 9.67023e-05
     eigenvector =
     -0.0291933
     0.328712
     -0.791411
     0.514553
     ...
This can be compared with the corresponding output from GNU OCTAVE,

     octave> [v,d] = eig(hilb(4));
     octave> diag(d)
     ans =

        9.6702e-05
        6.7383e-03
        1.6914e-01
        1.5002e+00

     octave> v
     v =

        0.029193   0.179186  -0.582076   0.792608
       -0.328712  -0.741918   0.370502   0.451923
        0.791411   0.100228   0.509579   0.322416
       -0.514553   0.638283   0.514048   0.252161

Note that the eigenvectors can differ by a change of sign, since the
sign of an eigenvector is arbitrary.

File: gsl-ref.info, Node: Eigenvalue and Eigenvector References, Prev: Eigenvalue and Eigenvector Examples, Up: Eigensystems

References and Further Reading
==============================

Further information on the algorithms described in this section can be
found in the following book,

     G. H. Golub, C. F. Van Loan, `Matrix Computations' (3rd Ed, 1996),
     Johns Hopkins University Press, ISBN 0-8018-5414-8.

The LAPACK library is described in,

     `LAPACK Users' Guide' (Third Edition, 1999), Published by SIAM,
     ISBN 0-89871-447-8.

     `http://www.netlib.org/lapack'

The LAPACK source code can be found at the website above along with an
online copy of the users guide.

File: gsl-ref.info, Node: Fast Fourier Transforms, Next: Numerical Integration, Prev: Eigensystems, Up: Top

Fast Fourier Transforms (FFTs)
******************************

This chapter describes functions for performing Fast Fourier Transforms
(FFTs). The library includes radix-2 routines (for lengths which are a
power of two) and mixed-radix routines (which work for any length). For
efficiency there are separate versions of the routines for real data
and for complex data. The mixed-radix routines are a reimplementation
of the FFTPACK library of Paul Swarztrauber. Fortran code for FFTPACK
is available on Netlib (FFTPACK also includes some routines for sine
and cosine transforms but these are currently not available in GSL).
For details and derivations of the underlying algorithms consult the
document `GSL FFT Algorithms' (*note FFT References and Further
Reading::)

* Menu:

* Mathematical Definitions::
* Overview of complex data FFTs::
* Radix-2 FFT routines for complex data::
* Mixed-radix FFT routines for complex data::
* Overview of real data FFTs::
* Radix-2 FFT routines for real data::
* Mixed-radix FFT routines for real data::
* FFT References and Further Reading::

File: gsl-ref.info, Node: Mathematical Definitions, Next: Overview of complex data FFTs, Up: Fast Fourier Transforms

Mathematical Definitions
========================

Fast Fourier Transforms are efficient algorithms for calculating the
discrete fourier transform (DFT),

     x_j = \sum_{k=0}^{N-1} z_k \exp(-2\pi i j k / N)

The DFT usually arises as an approximation to the continuous fourier
transform when functions are sampled at discrete intervals in space or
time. The naive evaluation of the discrete fourier transform is a
matrix-vector multiplication W\vec{z}. A general matrix-vector
multiplication takes O(N^2) operations for N data-points. Fast fourier
transform algorithms use a divide-and-conquer strategy to factorize the
matrix W into smaller sub-matrices, corresponding to the integer
factors of the length N. If N can be factorized into a product of
integers f_1 f_2 ... f_n then the DFT can be computed in O(N \sum f_i)
operations. For a radix-2 FFT this gives an operation count of
O(N \log_2 N).

All the FFT functions offer three types of transform: forwards, inverse
and backwards, based on the same mathematical definitions.
The definition of the "forward fourier transform", x = FFT(z), is, x_j = \sum_{k=0}^{N-1} z_k \exp(-2\pi i j k / N) and the definition of the "inverse fourier transform", x = IFFT(z), is, z_j = {1 \over N} \sum_{k=0}^{N-1} x_k \exp(2\pi i j k / N). The factor of 1/N makes this a true inverse. For example, a call to `gsl_fft_complex_forward' followed by a call to `gsl_fft_complex_inverse' should return the original data (within numerical errors). In general there are two possible choices for the sign of the exponential in the transform/ inverse-transform pair. GSL follows the same convention as FFTPACK, using a negative exponential for the forward transform. The advantage of this convention is that the inverse transform recreates the original function with simple fourier synthesis. Numerical Recipes uses the opposite convention, a positive exponential in the forward transform. The "backwards FFT" is simply our terminology for an unscaled version of the inverse FFT, z^{backwards}_j = \sum_{k=0}^{N-1} x_k \exp(2\pi i j k / N). When the overall scale of the result is unimportant it is often convenient to use the backwards FFT instead of the inverse to save unnecessary divisions.  File: gsl-ref.info, Node: Overview of complex data FFTs, Next: Radix-2 FFT routines for complex data, Prev: Mathematical Definitions, Up: Fast Fourier Transforms Overview of complex data FFTs ============================= The inputs and outputs for the complex FFT routines are "packed arrays" of floating point numbers. In a packed array the real and imaginary parts of each complex number are placed in alternate neighboring elements. For example, the following definition of a packed array of length 6, double x[3*2]; gsl_complex_packed_array data = x; can be used to hold an array of three complex numbers, `z[3]', in the following way, data[0] = Re(z[0]) data[1] = Im(z[0]) data[2] = Re(z[1]) data[3] = Im(z[1]) data[4] = Re(z[2]) data[5] = Im(z[2]) The array indices for the data have the same ordering as those in the definition of the DFT--i.e. there are no index transformations or permutations of the data. A "stride" parameter allows the user to perform transforms on the elements `z[stride*i]' instead of `z[i]'. A stride greater than 1 can be used to take an in-place FFT of the column of a matrix. A stride of 1 accesses the array without any additional spacing between elements. To perform an FFT on a vector argument, such as `gsl_complex_vector * v', use the following definitions (or their equivalents) when calling the functions described in this chapter: gsl_complex_packed_array data = v->data; size_t stride = v->stride; size_t n = v->size; For physical applications it is important to remember that the index appearing in the DFT does not correspond directly to a physical frequency. If the time-step of the DFT is \Delta then the frequency-domain includes both positive and negative frequencies, ranging from -1/(2\Delta) through 0 to +1/(2\Delta). The positive frequencies are stored from the beginning of the array up to the middle, and the negative frequencies are stored backwards from the end of the array. Here is a table which shows the layout of the array DATA, and the correspondence between the time-domain data z, and the frequency-domain data x. index z x = FFT(z) 0 z(t = 0) x(f = 0) 1 z(t = 1) x(f = 1/(N Delta)) 2 z(t = 2) x(f = 2/(N Delta)) . ........ .................. N/2 z(t = N/2) x(f = +1/(2 Delta), -1/(2 Delta)) . ........ .................. 
     N-3      z(t = N-3)      x(f = -3/(N Delta))
     N-2      z(t = N-2)      x(f = -2/(N Delta))
     N-1      z(t = N-1)      x(f = -1/(N Delta))

When N is even the location N/2 contains the most positive and negative
frequencies (+1/(2 \Delta), -1/(2 \Delta)) which are equivalent. If N
is odd then the general structure of the table above still applies, but
N/2 does not appear.

File: gsl-ref.info, Node: Radix-2 FFT routines for complex data, Next: Mixed-radix FFT routines for complex data, Prev: Overview of complex data FFTs, Up: Fast Fourier Transforms

Radix-2 FFT routines for complex data
=====================================

The radix-2 algorithms described in this section are simple and
compact, although not necessarily the most efficient. They use the
Cooley-Tukey algorithm to compute in-place complex FFTs for lengths
which are a power of 2--no additional storage is required. The
corresponding self-sorting mixed-radix routines offer better
performance at the expense of requiring additional working space.

All the functions described in this section are declared in the header
file `gsl_fft_complex.h'.

 - Function: int gsl_fft_complex_radix2_forward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N)
 - Function: int gsl_fft_complex_radix2_transform (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, gsl_fft_direction SIGN)
 - Function: int gsl_fft_complex_radix2_backward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N)
 - Function: int gsl_fft_complex_radix2_inverse (gsl_complex_packed_array DATA, size_t STRIDE, size_t N)
     These functions compute forward, backward and inverse FFTs of
     length N with stride STRIDE, on the packed complex array DATA
     using an in-place radix-2 decimation-in-time algorithm. The length
     of the transform N is restricted to powers of two. For the
     `transform' version of the function the SIGN argument can be
     either `forward' (-1) or `backward' (+1).

     The functions return a value of `GSL_SUCCESS' if no errors were
     detected, or `GSL_EDOM' if the length of the data N is not a power
     of two.

 - Function: int gsl_fft_complex_radix2_dif_forward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N)
 - Function: int gsl_fft_complex_radix2_dif_transform (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, gsl_fft_direction SIGN)
 - Function: int gsl_fft_complex_radix2_dif_backward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N)
 - Function: int gsl_fft_complex_radix2_dif_inverse (gsl_complex_packed_array DATA, size_t STRIDE, size_t N)
     These are decimation-in-frequency versions of the radix-2 FFT
     functions.

Here is an example program which computes the FFT of a short pulse in a
sample of length 128. To make the resulting fourier transform real the
pulse is defined for equal positive and negative times (-10 ... 10),
where the negative times wrap around the end of the array.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_fft_complex.h>

     #define REAL(z,i) ((z)[2*(i)])
     #define IMAG(z,i) ((z)[2*(i)+1])

     int
     main (void)
     {
       int i;
       double data[2*128];

       for (i = 0; i < 128; i++)
         {
            REAL(data,i) = 0.0;
            IMAG(data,i) = 0.0;
         }

       REAL(data,0) = 1.0;

       for (i = 1; i <= 10; i++)
         {
            REAL(data,i) = REAL(data,128-i) = 1.0;
         }

       for (i = 0; i < 128; i++)
         {
            printf ("%d %e %e\n", i,
                    REAL(data,i), IMAG(data,i));
         }
       printf ("\n");

       gsl_fft_complex_radix2_forward (data, 1, 128);

       for (i = 0; i < 128; i++)
         {
            printf ("%d %e %e\n", i,
                    REAL(data,i)/sqrt(128),
                    IMAG(data,i)/sqrt(128));
         }

       return 0;
     }

Note that we have assumed that the program is using the default error
handler (which calls `abort' for any errors).
If you are not using a safe error handler you would need to check the return status of `gsl_fft_complex_radix2_forward'. The transformed data is rescaled by 1/\sqrt N so that it fits on the same plot as the input. Only the real part is shown, by the choice of the input data the imaginary part is zero. Allowing for the wrap-around of negative times at t=128, and working in units of k/N, the DFT approximates the continuum fourier transform, giving a modulated sine function.  File: gsl-ref.info, Node: Mixed-radix FFT routines for complex data, Next: Overview of real data FFTs, Prev: Radix-2 FFT routines for complex data, Up: Fast Fourier Transforms Mixed-radix FFT routines for complex data ========================================= This section describes mixed-radix FFT algorithms for complex data. The mixed-radix functions work for FFTs of any length. They are a reimplementation of Paul Swarztrauber's Fortran FFTPACK library. The theory is explained in the review article `Self-sorting Mixed-radix FFTs' by Clive Temperton. The routines here use the same indexing scheme and basic algorithms as FFTPACK. The mixed-radix algorithm is based on sub-transform modules--highly optimized small length FFTs which are combined to create larger FFTs. There are efficient modules for factors of 2, 3, 4, 5, 6 and 7. The modules for the composite factors of 4 and 6 are faster than combining the modules for 2*2 and 2*3. For factors which are not implemented as modules there is a fall-back to a general length-n module which uses Singleton's method for efficiently computing a DFT. This module is O(n^2), and slower than a dedicated module would be but works for any length n. Of course, lengths which use the general length-n module will still be factorized as much as possible. For example, a length of 143 will be factorized into 11*13. Large prime factors are the worst case scenario, e.g. as found in n=2*3*99991, and should be avoided because their O(n^2) scaling will dominate the run-time (consult the document `GSL FFT Algorithms' included in the GSL distribution if you encounter this problem). The mixed-radix initialization function `gsl_fft_complex_wavetable_alloc' returns the list of factors chosen by the library for a given length N. It can be used to check how well the length has been factorized, and estimate the run-time. To a first approximation the run-time scales as N \sum f_i, where the f_i are the factors of N. For programs under user control you may wish to issue a warning that the transform will be slow when the length is poorly factorized. If you frequently encounter data lengths which cannot be factorized using the existing small-prime modules consult `GSL FFT Algorithms' for details on adding support for other factors. All the functions described in this section are declared in the header file `gsl_fft_complex.h'. - Function: gsl_fft_complex_wavetable * gsl_fft_complex_wavetable_alloc (size_t N) This function prepares a trigonometric lookup table for a complex FFT of length N. The function returns a pointer to the newly allocated `gsl_fft_complex_wavetable' if no errors were detected, and a null pointer in the case of error. The length N is factorized into a product of subtransforms, and the factors and their trigonometric coefficients are stored in the wavetable. The trigonometric coefficients are computed using direct calls to `sin' and `cos', for accuracy. 
Recursion relations could be used to compute the lookup table faster, but if an application performs many FFTs of the same length then this computation is a one-off overhead which does not affect the final throughput. The wavetable structure can be used repeatedly for any transform of the same length. The table is not modified by calls to any of the other FFT functions. The same wavetable can be used for both forward and backward (or inverse) transforms of a given length. - Function: void gsl_fft_complex_wavetable_free (gsl_fft_complex_wavetable * WAVETABLE) This function frees the memory associated with the wavetable WAVETABLE. The wavetable can be freed if no further FFTs of the same length will be needed. These functions operate on a `gsl_fft_complex_wavetable' structure which contains internal parameters for the FFT. It is not necessary to set any of the components directly but it can sometimes be useful to examine them. For example, the chosen factorization of the FFT length is given and can be used to provide an estimate of the run-time or numerical error. The wavetable structure is declared in the header file `gsl_fft_complex.h'. - Data Type: gsl_fft_complex_wavetable This is a structure that holds the factorization and trigonometric lookup tables for the mixed radix fft algorithm. It has the following components: `size_t n' This is the number of complex data points `size_t nf' This is the number of factors that the length `n' was decomposed into. `size_t factor[64]' This is the array of factors. Only the first `nf' elements are used. `gsl_complex * trig' This is a pointer to a preallocated trigonometric lookup table of `n' complex elements. `gsl_complex * twiddle[64]' This is an array of pointers into `trig', giving the twiddle factors for each pass. The mixed radix algorithms require additional working space to hold the intermediate steps of the transform. - Function: gsl_fft_complex_workspace * gsl_fft_complex_workspace_alloc (size_t N) This function allocates a workspace for a complex transform of length N. - Function: void gsl_fft_complex_workspace_free (gsl_fft_complex_workspace * WORKSPACE) This function frees the memory associated with the workspace WORKSPACE. The workspace can be freed if no further FFTs of the same length will be needed. The following functions compute the transform, - Function: int gsl_fft_complex_forward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, const gsl_fft_complex_wavetable * WAVETABLE, gsl_fft_complex_workspace * WORK) - Function: int gsl_fft_complex_transform (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, const gsl_fft_complex_wavetable * WAVETABLE, gsl_fft_complex_workspace * WORK, gsl_fft_direction SIGN) - Function: int gsl_fft_complex_backward (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, const gsl_fft_complex_wavetable * WAVETABLE, gsl_fft_complex_workspace * WORK) - Function: int gsl_fft_complex_inverse (gsl_complex_packed_array DATA, size_t STRIDE, size_t N, const gsl_fft_complex_wavetable * WAVETABLE, gsl_fft_complex_workspace * WORK) These functions compute forward, backward and inverse FFTs of length N with stride STRIDE, on the packed complex array DATA, using a mixed radix decimation-in-frequency algorithm. There is no restriction on the length N. Efficient modules are provided for subtransforms of length 2, 3, 4, 5, 6 and 7. Any remaining factors are computed with a slow, O(n^2), general-n module. The caller must supply a WAVETABLE containing the trigonometric lookup tables and a workspace WORK. 
For the `transform' version of the function the SIGN argument can be
either `forward' (-1) or `backward' (+1).

The functions return a value of `0' if no errors were detected. The
following `gsl_errno' conditions are defined for these functions:

`GSL_EDOM'
     The length of the data N is not a positive integer (i.e. N is
     zero).

`GSL_EINVAL'
     The length of the data N and the length used to compute the given
     WAVETABLE do not match.

Here is an example program which computes the FFT of a short pulse in a
sample of length 630 (=2*3*3*5*7) using the mixed-radix algorithm.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_fft_complex.h>

     #define REAL(z,i) ((z)[2*(i)])
     #define IMAG(z,i) ((z)[2*(i)+1])

     int
     main (void)
     {
       int i;
       const int n = 630;
       double data[2*n];

       gsl_fft_complex_wavetable * wavetable;
       gsl_fft_complex_workspace * workspace;

       for (i = 0; i < n; i++)
         {
           REAL(data,i) = 0.0;
           IMAG(data,i) = 0.0;
         }

       data[0] = 1.0;

       for (i = 1; i <= 10; i++)
         {
           REAL(data,i) = REAL(data,n-i) = 1.0;
         }

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e %e\n", i, REAL(data,i),
                                     IMAG(data,i));
         }
       printf ("\n");

       wavetable = gsl_fft_complex_wavetable_alloc (n);
       workspace = gsl_fft_complex_workspace_alloc (n);

       for (i = 0; i < wavetable->nf; i++)
         {
            printf ("# factor %d: %d\n", i,
                    wavetable->factor[i]);
         }

       gsl_fft_complex_forward (data, 1, n,
                                wavetable, workspace);

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e %e\n", i, REAL(data,i),
                                     IMAG(data,i));
         }

       gsl_fft_complex_wavetable_free (wavetable);
       gsl_fft_complex_workspace_free (workspace);
       return 0;
     }

Note that we have assumed that the program is using the default `gsl'
error handler (which calls `abort' for any errors). If you are not
using a safe error handler you would need to check the return status of
all the `gsl' routines.

File: gsl-ref.info, Node: Overview of real data FFTs, Next: Radix-2 FFT routines for real data, Prev: Mixed-radix FFT routines for complex data, Up: Fast Fourier Transforms

Overview of real data FFTs
==========================

The functions for real data are similar to those for complex data.
However, there is an important difference between forward and inverse
transforms. The fourier transform of a real sequence is not real. It is
a complex sequence with a special symmetry:

     z_k = z_{N-k}^*

A sequence with this symmetry is called "conjugate-complex" or
"half-complex". This different structure requires different storage
layouts for the forward transform (from real to half-complex) and
inverse transform (from half-complex back to real). As a consequence
the routines are divided into two sets: functions in `gsl_fft_real'
which operate on real sequences and functions in `gsl_fft_halfcomplex'
which operate on half-complex sequences.

Functions in `gsl_fft_real' compute the frequency coefficients of a
real sequence. The half-complex coefficients c of a real sequence x are
given by fourier analysis,

     c_k = \sum_{j=0}^{N-1} x_j \exp(-2 \pi i j k /N)

Functions in `gsl_fft_halfcomplex' compute inverse or backwards
transforms. They reconstruct real sequences by fourier synthesis from
their half-complex frequency coefficients, c,

     x_j = {1 \over N} \sum_{k=0}^{N-1} c_k \exp(2 \pi i j k /N)

The symmetry of the half-complex sequence implies that only half of the
complex numbers in the output need to be stored. The remaining half can
be reconstructed using the half-complex symmetry condition. This works
for all lengths, even and odd--when the length is even the middle value
where k=N/2 is also real.
Thus only N real numbers are required to store the half-complex
sequence, and the transform of a real sequence can be stored in the
same size array as the original data.

The precise storage arrangements depend on the algorithm, and are
different for radix-2 and mixed-radix routines. The radix-2 function
operates in-place, which constrains the locations where each element
can be stored. The restriction forces real and imaginary parts to be
stored far apart. The mixed-radix algorithm does not have this
restriction, and it stores the real and imaginary parts of a given term
in neighboring locations (which is desirable for better locality of
memory accesses).

File: gsl-ref.info, Node: Radix-2 FFT routines for real data, Next: Mixed-radix FFT routines for real data, Prev: Overview of real data FFTs, Up: Fast Fourier Transforms

Radix-2 FFT routines for real data
==================================

This section describes radix-2 FFT algorithms for real data. They use
the Cooley-Tukey algorithm to compute in-place FFTs for lengths which
are a power of 2.

The radix-2 FFT functions for real data are declared in the header file
`gsl_fft_real.h'.

 - Function: int gsl_fft_real_radix2_transform (double DATA[], size_t STRIDE, size_t N)
     This function computes an in-place radix-2 FFT of length N and
     stride STRIDE on the real array DATA. The output is a half-complex
     sequence, which is stored in-place. The arrangement of the
     half-complex terms uses the following scheme: for k < N/2 the real
     part of the k-th term is stored in location k, and the
     corresponding imaginary part is stored in location N-k. Terms with
     k > N/2 can be reconstructed using the symmetry z_k = z^*_{N-k}.
     The terms for k=0 and k=N/2 are both purely real, and count as a
     special case. Their real parts are stored in locations 0 and N/2
     respectively, while their imaginary parts which are zero are not
     stored.

     The following table shows the correspondence between the output
     DATA and the equivalent results obtained by considering the input
     data as a complex sequence with zero imaginary part,

          complex[0].real    =    data[0]
          complex[0].imag    =    0
          complex[1].real    =    data[1]
          complex[1].imag    =    data[N-1]
          ...............         ................
          complex[k].real    =    data[k]
          complex[k].imag    =    data[N-k]
          ...............         ................
          complex[N/2].real  =    data[N/2]
          complex[N/2].imag  =    0
          ...............         ................
          complex[k'].real   =    data[k]         k' = N - k
          complex[k'].imag   =   -data[N-k]
          ...............         ................
          complex[N-1].real  =    data[1]
          complex[N-1].imag  =   -data[N-1]

The radix-2 FFT functions for halfcomplex data are declared in the
header file `gsl_fft_halfcomplex.h'.

 - Function: int gsl_fft_halfcomplex_radix2_inverse (double DATA[], size_t STRIDE, size_t N)
 - Function: int gsl_fft_halfcomplex_radix2_backward (double DATA[], size_t STRIDE, size_t N)
     These functions compute the inverse or backwards in-place radix-2
     FFT of length N and stride STRIDE on the half-complex sequence
     DATA stored according to the output scheme used by
     `gsl_fft_real_radix2_transform'. The result is a real array stored
     in natural order.

File: gsl-ref.info, Node: Mixed-radix FFT routines for real data, Next: FFT References and Further Reading, Prev: Radix-2 FFT routines for real data, Up: Fast Fourier Transforms

Mixed-radix FFT routines for real data
======================================

This section describes mixed-radix FFT algorithms for real data. The
mixed-radix functions work for FFTs of any length.
They are a reimplementation of the real-FFT routines in the Fortran
FFTPACK library by Paul Swarztrauber. The theory behind the algorithm
is explained in the article `Fast Mixed-Radix Real Fourier Transforms'
by Clive Temperton. The routines here use the same indexing scheme and
basic algorithms as FFTPACK.

The functions use the FFTPACK storage convention for half-complex
sequences. In this convention the half-complex transform of a real
sequence is stored with frequencies in increasing order, starting at
zero, with the real and imaginary parts of each frequency in
neighboring locations. When a value is known to be real the imaginary
part is not stored. The imaginary part of the zero-frequency component
is never stored. It is known to be zero, since the zero frequency
component is simply the sum of the input data (which is all real). For
a sequence of even length the imaginary part of the frequency n/2 is
not stored either, since the symmetry z_k = z_{N-k}^* implies that this
is purely real too.

The storage scheme is best shown by some examples. The table below
shows the output for an odd-length sequence, n=5. The two columns give
the correspondence between the 5 values in the half-complex sequence
returned by `gsl_fft_real_transform', HALFCOMPLEX[] and the values
COMPLEX[] that would be returned if the same real input sequence were
passed to `gsl_fft_complex_backward' as a complex sequence (with
imaginary parts set to `0'),

     complex[0].real  =  halfcomplex[0]
     complex[0].imag  =  0
     complex[1].real  =  halfcomplex[1]
     complex[1].imag  =  halfcomplex[2]
     complex[2].real  =  halfcomplex[3]
     complex[2].imag  =  halfcomplex[4]
     complex[3].real  =  halfcomplex[3]
     complex[3].imag  = -halfcomplex[4]
     complex[4].real  =  halfcomplex[1]
     complex[4].imag  = -halfcomplex[2]

The upper elements of the COMPLEX array, `complex[3]' and `complex[4]'
are filled in using the symmetry condition. The imaginary part of the
zero-frequency term `complex[0].imag' is known to be zero by the
symmetry.

The next table shows the output for an even-length sequence, n=6. In
the even case there are two values which are purely real,

     complex[0].real  =  halfcomplex[0]
     complex[0].imag  =  0
     complex[1].real  =  halfcomplex[1]
     complex[1].imag  =  halfcomplex[2]
     complex[2].real  =  halfcomplex[3]
     complex[2].imag  =  halfcomplex[4]
     complex[3].real  =  halfcomplex[5]
     complex[3].imag  =  0
     complex[4].real  =  halfcomplex[3]
     complex[4].imag  = -halfcomplex[4]
     complex[5].real  =  halfcomplex[1]
     complex[5].imag  = -halfcomplex[2]

The upper elements of the COMPLEX array, `complex[4]' and `complex[5]'
are filled in using the symmetry condition. Both `complex[0].imag' and
`complex[3].imag' are known to be zero.

All these functions are declared in the header files `gsl_fft_real.h'
and `gsl_fft_halfcomplex.h'.

 - Function: gsl_fft_real_wavetable * gsl_fft_real_wavetable_alloc (size_t N)
 - Function: gsl_fft_halfcomplex_wavetable * gsl_fft_halfcomplex_wavetable_alloc (size_t N)
     These functions prepare trigonometric lookup tables for an FFT of
     size n real elements. The functions return a pointer to the newly
     allocated struct if no errors were detected, and a null pointer in
     the case of error. The length N is factorized into a product of
     subtransforms, and the factors and their trigonometric
     coefficients are stored in the wavetable. The trigonometric
     coefficients are computed using direct calls to `sin' and `cos',
     for accuracy.
Recursion relations could be used to compute the lookup table faster, but if an application performs many FFTs of the same length then computing the wavetable is a one-off overhead which does not affect the final throughput. The wavetable structure can be used repeatedly for any transform of the same length. The table is not modified by calls to any of the other FFT functions. The appropriate type of wavetable must be used for forward real or inverse half-complex transforms. - Function: void gsl_fft_real_wavetable_free (gsl_fft_real_wavetable * WAVETABLE) - Function: void gsl_fft_halfcomplex_wavetable_free (gsl_fft_halfcomplex_wavetable * WAVETABLE) These functions free the memory associated with the wavetable WAVETABLE. The wavetable can be freed if no further FFTs of the same length will be needed. The mixed radix algorithms require additional working space to hold the intermediate steps of the transform, - Function: gsl_fft_real_workspace * gsl_fft_real_workspace_alloc (size_t N) This function allocates a workspace for a real transform of length N. The same workspace can be used for both forward real and inverse halfcomplex transforms. - Function: void gsl_fft_real_workspace_free (gsl_fft_real_workspace * WORKSPACE) This function frees the memory associated with the workspace WORKSPACE. The workspace can be freed if no further FFTs of the same length will be needed. The following functions compute the transforms of real and half-complex data, - Function: int gsl_fft_real_transform (double DATA[], size_t STRIDE, size_t N, const gsl_fft_real_wavetable * WAVETABLE, gsl_fft_real_workspace * WORK) - Function: int gsl_fft_halfcomplex_transform (double DATA[], size_t STRIDE, size_t N, const gsl_fft_halfcomplex_wavetable * WAVETABLE, gsl_fft_real_workspace * WORK) These functions compute the FFT of DATA, a real or half-complex array of length N, using a mixed radix decimation-in-frequency algorithm. For `gsl_fft_real_transform' DATA is an array of time-ordered real data. For `gsl_fft_halfcomplex_transform' DATA contains fourier coefficients in the half-complex ordering described above. There is no restriction on the length N. Efficient modules are provided for subtransforms of length 2, 3, 4 and 5. Any remaining factors are computed with a slow, O(n^2), general-n module. The caller must supply a WAVETABLE containing trigonometric lookup tables and a workspace WORK. - Function: int gsl_fft_real_unpack (const double REAL_COEFFICIENT[], gsl_complex_packed_array COMPLEX_COEFFICIENT[], size_t STRIDE, size_t N) This function converts a single real array, REAL_COEFFICIENT into an equivalent complex array, COMPLEX_COEFFICIENT, (with imaginary part set to zero), suitable for `gsl_fft_complex' routines. The algorithm for the conversion is simply, for (i = 0; i < n; i++) { complex_coefficient[i].real = real_coefficient[i]; complex_coefficient[i].imag = 0.0; } - Function: int gsl_fft_halfcomplex_unpack (const double HALFCOMPLEX_COEFFICIENT[], gsl_complex_packed_array COMPLEX_COEFFICIENT, size_t STRIDE, size_t N) This function converts HALFCOMPLEX_COEFFICIENT, an array of half-complex coefficients as returned by `gsl_fft_real_transform', into an ordinary complex array, COMPLEX_COEFFICIENT. It fills in the complex array using the symmetry z_k = z_{N-k}^* to reconstruct the redundant elements. 
The algorithm for the conversion is,

     complex_coefficient[0].real
       = halfcomplex_coefficient[0];
     complex_coefficient[0].imag
       = 0.0;

     for (i = 1; i < n - i; i++)
       {
         double hc_real
           = halfcomplex_coefficient[2 * i - 1];
         double hc_imag
           = halfcomplex_coefficient[2 * i];
         complex_coefficient[i].real = hc_real;
         complex_coefficient[i].imag = hc_imag;
         complex_coefficient[n - i].real = hc_real;
         complex_coefficient[n - i].imag = -hc_imag;
       }

     if (i == n - i)
       {
         complex_coefficient[i].real
           = halfcomplex_coefficient[n - 1];
         complex_coefficient[i].imag
           = 0.0;
       }

Here is an example program using `gsl_fft_real_transform' and
`gsl_fft_halfcomplex_inverse'. It generates a real signal in the shape
of a square pulse. The pulse is fourier transformed to frequency space,
and all but the lowest ten frequency components are removed from the
array of fourier coefficients returned by `gsl_fft_real_transform'. The
remaining fourier coefficients are transformed back to the time-domain,
to give a filtered version of the square pulse. Since fourier
coefficients are stored using the half-complex symmetry both positive
and negative frequencies are removed and the final filtered signal is
also real.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_fft_real.h>
     #include <gsl/gsl_fft_halfcomplex.h>

     int
     main (void)
     {
       int i, n = 100;
       double data[n];

       gsl_fft_real_wavetable * real;
       gsl_fft_halfcomplex_wavetable * hc;
       gsl_fft_real_workspace * work;

       for (i = 0; i < n; i++)
         {
           data[i] = 0.0;
         }

       for (i = n / 3; i < 2 * n / 3; i++)
         {
           data[i] = 1.0;
         }

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e\n", i, data[i]);
         }
       printf ("\n");

       work = gsl_fft_real_workspace_alloc (n);
       real = gsl_fft_real_wavetable_alloc (n);

       gsl_fft_real_transform (data, 1, n,
                               real, work);

       gsl_fft_real_wavetable_free (real);

       for (i = 11; i < n; i++)
         {
           data[i] = 0;
         }

       hc = gsl_fft_halfcomplex_wavetable_alloc (n);

       gsl_fft_halfcomplex_inverse (data, 1, n,
                                    hc, work);
       gsl_fft_halfcomplex_wavetable_free (hc);

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e\n", i, data[i]);
         }

       gsl_fft_real_workspace_free (work);
       return 0;
     }

File: gsl-ref.info, Node: FFT References and Further Reading, Prev: Mixed-radix FFT routines for real data, Up: Fast Fourier Transforms

References and Further Reading
==============================

A good starting point for learning more about the FFT is the review
article `Fast Fourier Transforms: A Tutorial Review and A State of the
Art' by Duhamel and Vetterli,

     P. Duhamel and M. Vetterli. Fast fourier transforms: A tutorial
     review and a state of the art. `Signal Processing', 19:259-299,
     1990.

To find out about the algorithms used in the GSL routines you may want
to consult the document `GSL FFT Algorithms' (it is included in GSL, as
`doc/fftalgorithms.tex'). This has general information on FFTs and
explicit derivations of the implementation for each routine. There are
also references to the relevant literature. For convenience some of the
more important references are reproduced below.

There are several introductory books on the FFT with example programs,
such as `The Fast Fourier Transform' by Brigham and `DFT/FFT and
Convolution Algorithms' by Burrus and Parks,

     E. Oran Brigham. `The Fast Fourier Transform'. Prentice Hall,
     1974.

     C. S. Burrus and T. W. Parks. `DFT/FFT and Convolution
     Algorithms'. Wiley, 1984.

Both these introductory books cover the radix-2 FFT in some detail.
The mixed-radix algorithm at the heart of the FFTPACK routines is
reviewed in Clive Temperton's paper,

     Clive Temperton. Self-sorting mixed-radix fast fourier transforms.
     `Journal of Computational Physics', 52(1):1-23, 1983.
The derivation of FFTs for real-valued data is explained in the following two articles, Henrik V. Sorenson, Douglas L. Jones, Michael T. Heideman, and C. Sidney Burrus. Real-valued fast fourier transform algorithms. `IEEE Transactions on Acoustics, Speech, and Signal Processing', ASSP-35(6):849-863, 1987. Clive Temperton. Fast mixed-radix real fourier transforms. `Journal of Computational Physics', 52:340-350, 1983. In 1979 the IEEE published a compendium of carefully-reviewed Fortran FFT programs in `Programs for Digital Signal Processing'. It is a useful reference for implementations of many different FFT algorithms, Digital Signal Processing Committee and IEEE Acoustics, Speech, and Signal Processing Committee, editors. `Programs for Digital Signal Processing'. IEEE Press, 1979. For large-scale FFT work we recommend the use of the dedicated FFTW library by Frigo and Johnson. The FFTW library is self-optimizing--it automatically tunes itself for each hardware platform in order to achieve maximum performance. It is available under the GNU GPL. FFTW Website, `http://www.fftw.org/' The source code for FFTPACK is available from Netlib, FFTPACK, `http://www.netlib.org/fftpack/'  File: gsl-ref.info, Node: Numerical Integration, Next: Random Number Generation, Prev: Fast Fourier Transforms, Up: Top Numerical Integration ********************* This chapter describes routines for performing numerical integration (quadrature) of a function in one dimension. There are routines for adaptive and non-adaptive integration of general functions, with specialised routines for specific cases. These include integration over infinite and semi-infinite ranges, singular integrals, including logarithmic singularities, computation of Cauchy principal values and oscillatory integrals. The library reimplements the algorithms used in QUADPACK, a numerical integration package written by Piessens, Doncker-Kapenga, Uberhuber and Kahaner. Fortran code for QUADPACK is available on Netlib. The functions described in this chapter are declared in the header file `gsl_integration.h'. * Menu: * Numerical Integration Introduction:: * QNG non-adaptive Gauss-Kronrod integration:: * QAG adaptive integration:: * QAGS adaptive integration with singularities:: * QAGP adaptive integration with known singular points:: * QAGI adaptive integration on infinite intervals:: * QAWC adaptive integration for Cauchy principal values:: * QAWS adaptive integration for singular functions:: * QAWO adaptive integration for oscillatory functions:: * QAWF adaptive integration for Fourier integrals:: * Numerical integration error codes:: * Numerical integration examples:: * Numerical integration References and Further Reading::  File: gsl-ref.info, Node: Numerical Integration Introduction, Next: QNG non-adaptive Gauss-Kronrod integration, Up: Numerical Integration Introduction ============ Each algorithm computes an approximation to a definite integral of the form, I = \int_a^b f(x) w(x) dx where w(x) is a weight function (for general integrands w(x)=1). The user provides absolute and relative error bounds (epsabs, epsrel) which specify the following accuracy requirement, |RESULT - I| <= max(epsabs, epsrel |I|) where RESULT is the numerical approximation obtained by the algorithm. 
The algorithms attempt to estimate the absolute error ABSERR = |RESULT - I| in such a way that the following inequality holds, |RESULT - I| <= ABSERR <= max(epsabs, epsrel |I|) The routines will fail to converge if the error bounds are too stringent, but always return the best approximation obtained up to that stage. The algorithms in QUADPACK use a naming convention based on the following letters, `Q' - quadrature routine `N' - non-adaptive integrator `A' - adaptive integrator `G' - general integrand (user-defined) `W' - weight function with integrand `S' - singularities can be more readily integrated `P' - points of special difficulty can be supplied `I' - infinite range of integration `O' - oscillatory weight function, cos or sin `F' - Fourier integral `C' - Cauchy principal value The algorithms are built on pairs of quadrature rules, a higher order rule and a lower order rule. The higher order rule is used to compute the best approximation to an integral over a small range. The difference between the results of the higher order rule and the lower order rule gives an estimate of the error in the approximation. * Menu: * Integrands without weight functions:: * Integrands with weight functions:: * Integrands with singular weight functions::  File: gsl-ref.info, Node: Integrands without weight functions, Next: Integrands with weight functions, Up: Numerical Integration Introduction Integrands without weight functions ----------------------------------- The algorithms for general functions (without a weight function) are based on Gauss-Kronrod rules. A Gauss-Kronrod rule begins with a classical Gaussian quadrature rule of order m. This is extended with additional points between each of the abscissae to give a higher order Kronrod rule of order 2m+1. The Kronrod rule is efficient because it reuses existing function evaluations from the Gaussian rule. The higher order Kronrod rule is used as the best approximation to the integral, and the difference between the two rules is used as an estimate of the error in the approximation.  File: gsl-ref.info, Node: Integrands with weight functions, Next: Integrands with singular weight functions, Prev: Integrands without weight functions, Up: Numerical Integration Introduction Integrands with weight functions -------------------------------- For integrands with weight functions the algorithms use Clenshaw-Curtis quadrature rules. A Clenshaw-Curtis rule begins with an n-th order Chebyshev polynomial approximation to the integrand. This polynomial can be integrated exactly to give an approximation to the integral of the original function. The Chebyshev expansion can be extended to higher orders to improve the approximation and provide an estimate of the error.  File: gsl-ref.info, Node: Integrands with singular weight functions, Prev: Integrands with weight functions, Up: Numerical Integration Introduction Integrands with singular weight functions ----------------------------------------- The presence of singularities (or other behavior) in the integrand can cause slow convergence in the Chebyshev approximation. The modified Clenshaw-Curtis rules used in QUADPACK separate out several common weight functions which cause slow convergence. These weight functions are integrated analytically against the Chebyshev polynomials to precompute "modified Chebyshev moments". Combining the moments with the Chebyshev approximation to the function gives the desired integral. 
The use of analytic integration for the singular part of the function allows exact cancellations and substantially improves the overall convergence behavior of the integration.  File: gsl-ref.info, Node: QNG non-adaptive Gauss-Kronrod integration, Next: QAG adaptive integration, Prev: Numerical Integration Introduction, Up: Numerical Integration QNG non-adaptive Gauss-Kronrod integration ========================================== The QNG algorithm is a non-adaptive procedure which uses fixed Gauss-Kronrod abscissae to sample the integrand at a maximum of 87 points. It is provided for fast integration of smooth functions. - Function: int gsl_integration_qng (const gsl_function * F, double A, double B, double EPSABS, double EPSREL, double * RESULT, double * ABSERR, size_t * NEVAL) This function applies the Gauss-Kronrod 10-point, 21-point, 43-point and 87-point integration rules in succession until an estimate of the integral of f over (a,b) is achieved within the desired absolute and relative error limits, EPSABS and EPSREL. The function returns the final approximation, RESULT, an estimate of the absolute error, ABSERR and the number of function evaluations used, NEVAL. The Gauss-Kronrod rules are designed in such a way that each rule uses all the results of its predecessors, in order to minimize the total number of function evaluations.  File: gsl-ref.info, Node: QAG adaptive integration, Next: QAGS adaptive integration with singularities, Prev: QNG non-adaptive Gauss-Kronrod integration, Up: Numerical Integration QAG adaptive integration ======================== The QAG algorithm is a simple adaptive integration procedure. The integration region is divided into subintervals, and on each iteration the subinterval with the largest estimated error is bisected. This reduces the overall error rapidly, as the subintervals become concentrated around local difficulties in the integrand. These subintervals are managed by a `gsl_integration_workspace' struct, which handles the memory for the subinterval ranges, results and error estimates. - Function: gsl_integration_workspace * gsl_integration_workspace_alloc (size_t N) This function allocates a workspace sufficient to hold N double precision intervals, their integration results and error estimates. - Function: void gsl_integration_workspace_free (gsl_integration_workspace * W) This function frees the memory associated with the workspace W. - Function: int gsl_integration_qag (const gsl_function * F, double A, double B, double EPSABS, double EPSREL, size_t LIMIT, int KEY, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function applies an integration rule adaptively until an estimate of the integral of f over (a,b) is achieved within the desired absolute and relative error limits, EPSABS and EPSREL. The function returns the final approximation, RESULT, and an estimate of the absolute error, ABSERR. The integration rule is determined by the value of KEY, which should be chosen from the following symbolic names, GSL_INTEG_GAUSS15 (key = 1) GSL_INTEG_GAUSS21 (key = 2) GSL_INTEG_GAUSS31 (key = 3) GSL_INTEG_GAUSS41 (key = 4) GSL_INTEG_GAUSS51 (key = 5) GSL_INTEG_GAUSS61 (key = 6) corresponding to the 15, 21, 31, 41, 51 and 61 point Gauss-Kronrod rules. The higher-order rules give better accuracy for smooth functions, while lower-order rules save time when the function contains local difficulties, such as discontinuities. 
On each iteration the adaptive integration strategy bisects the interval with the largest error estimate. The subintervals and their results are stored in the memory provided by WORKSPACE. The maximum number of subintervals is given by LIMIT, which may not exceed the allocated size of the workspace.  File: gsl-ref.info, Node: QAGS adaptive integration with singularities, Next: QAGP adaptive integration with known singular points, Prev: QAG adaptive integration, Up: Numerical Integration QAGS adaptive integration with singularities ============================================ The presence of an integrable singularity in the integration region causes an adaptive routine to concentrate new subintervals around the singularity. As the subintervals decrease in size the successive approximations to the integral converge in a limiting fashion. This approach to the limit can be accelerated using an extrapolation procedure. The QAGS algorithm combines adaptive bisection with the Wynn epsilon-algorithm to speed up the integration of many types of integrable singularities. - Function: int gsl_integration_qags (const gsl_function * F, double A, double B, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function applies the Gauss-Kronrod 21-point integration rule adaptively until an estimate of the integral of f over (a,b) is achieved within the desired absolute and relative error limits, EPSABS and EPSREL. The results are extrapolated using the epsilon-algorithm, which accelerates the convergence of the integral in the presence of discontinuities and integrable singularities. The function returns the final approximation from the extrapolation, RESULT, and an estimate of the absolute error, ABSERR. The subintervals and their results are stored in the memory provided by WORKSPACE. The maximum number of subintervals is given by LIMIT, which may not exceed the allocated size of the workspace.  File: gsl-ref.info, Node: QAGP adaptive integration with known singular points, Next: QAGI adaptive integration on infinite intervals, Prev: QAGS adaptive integration with singularities, Up: Numerical Integration QAGP adaptive integration with known singular points ==================================================== - Function: int gsl_integration_qagp (const gsl_function * F, double * PTS, size_t NPTS, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function applies the adaptive integration algorithm QAGS taking account of the user-supplied locations of singular points. The array PTS of length NPTS should contain the endpoints of the integration ranges defined by the integration region and locations of the singularities. For example, to integrate over the region (a,b) with break-points at x_1, x_2, x_3 (where a < x_1 < x_2 < x_3 < b) the following PTS array should be used pts[0] = a pts[1] = x_1 pts[2] = x_2 pts[3] = x_3 pts[4] = b with NPTS = 5. If you know the locations of the singular points in the integration region then this routine will be faster than `QAGS'.  
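As a brief illustration (this example is not part of the original
manual), the following sketch shows how `gsl_integration_qagp' might be
called with a PTS array of the form described above; the integrand,
tolerances and workspace size are arbitrary choices for the example.
The integrand 1/\sqrt{|x - 1/2|} has an integrable singularity at the
interior point x = 1/2, and the exact value of the integral over (0,1)
is 2 \sqrt 2 = 2.828427...

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_integration.h>

     /* integrand with an integrable singularity at x = 0.5 */
     double
     f (double x, void * params)
     {
       (void) params; /* unused */
       return 1.0 / sqrt (fabs (x - 0.5));
     }

     int
     main (void)
     {
       gsl_integration_workspace * w
         = gsl_integration_workspace_alloc (1000);

       double result, error;

       /* endpoints of the range and the interior singular point */
       double pts[3] = { 0.0, 0.5, 1.0 };

       gsl_function F;
       F.function = &f;
       F.params = 0;

       gsl_integration_qagp (&F, pts, 3, 0.0, 1e-7, 1000,
                             w, &result, &error);

       printf ("result          = %.10f\n", result);
       printf ("estimated error = %.10f\n", error);

       gsl_integration_workspace_free (w);
       return 0;
     }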
File: gsl-ref.info, Node: QAGI adaptive integration on infinite intervals, Next: QAWC adaptive integration for Cauchy principal values, Prev: QAGP adaptive integration with known singular points, Up: Numerical Integration QAGI adaptive integration on infinite intervals =============================================== - Function: int gsl_integration_qagi (gsl_function * F, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function computes the integral of the function F over the infinite interval (-\infty,+\infty). The integral is mapped onto the semi-open interval (0,1] using the transformation x = (1-t)/t, \int_{-\infty}^{+\infty} dx f(x) = \int_0^1 dt (f((1-t)/t) + f((-1+t)/t))/t^2. It is then integrated using the QAGS algorithm. The normal 21-point Gauss-Kronrod rule of QAGS is replaced by a 15-point rule, because the transformation can generate an integrable singularity at the origin. In this case a lower-order rule is more efficient. - Function: int gsl_integration_qagiu (gsl_function * F, double A, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function computes the integral of the function F over the semi-infinite interval (a,+\infty). The integral is mapped onto the semi-open interval (0,1] using the transformation x = a + (1-t)/t, \int_{a}^{+\infty} dx f(x) = \int_0^1 dt f(a + (1-t)/t)/t^2 and then integrated using the QAGS algorithm. - Function: int gsl_integration_qagil (gsl_function * F, double B, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function computes the integral of the function F over the semi-infinite interval (-\infty,b). The integral is mapped onto the semi-open interval (0,1] using the transformation x = b - (1-t)/t, \int_{-\infty}^{b} dx f(x) = \int_0^1 dt f(b - (1-t)/t)/t^2 and then integrated using the QAGS algorithm.  File: gsl-ref.info, Node: QAWC adaptive integration for Cauchy principal values, Next: QAWS adaptive integration for singular functions, Prev: QAGI adaptive integration on infinite intervals, Up: Numerical Integration QAWC adaptive integration for Cauchy principal values ===================================================== - Function: int gsl_integration_qawc (gsl_function * F, double A, double B, double C, double EPSABS, double EPSREL, size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function computes the Cauchy principal value of the integral of f over (a,b), with a singularity at C, I = \int_a^b dx f(x) / (x - c) The adaptive bisection algorithm of QAG is used, with modifications to ensure that subdivisions do not occur at the singular point x = c. When a subinterval contains the point x = c or is close to it then a special 25-point modified Clenshaw-Curtis rule is used to control the singularity. Further away from the singularity the algorithm uses an ordinary 15-point Gauss-Kronrod integration rule.  File: gsl-ref.info, Node: QAWS adaptive integration for singular functions, Next: QAWO adaptive integration for oscillatory functions, Prev: QAWC adaptive integration for Cauchy principal values, Up: Numerical Integration QAWS adaptive integration for singular functions ================================================ The QAWS algorithm is designed for integrands with algebraic-logarithmic singularities at the end-points of an integration region.
In order to work efficiently the algorithm requires a precomputed table of Chebyshev moments. - Function: gsl_integration_qaws_table * gsl_integration_qaws_table_alloc (double ALPHA, double BETA, int MU, int NU) This function allocates space for a `gsl_integration_qaws_table' struct and associated workspace describing a singular weight function W(x) with the parameters (\alpha, \beta, \mu, \nu), W(x) = (x-a)^alpha (b-x)^beta log^mu (x-a) log^nu (b-x) where \alpha > -1, \beta > -1, and \mu = 0, 1, \nu = 0, 1. The weight function can take four different forms depending on the values of \mu and \nu, W(x) = (x-a)^alpha (b-x)^beta (mu = 0, nu = 0) W(x) = (x-a)^alpha (b-x)^beta log(x-a) (mu = 1, nu = 0) W(x) = (x-a)^alpha (b-x)^beta log(b-x) (mu = 0, nu = 1) W(x) = (x-a)^alpha (b-x)^beta log(x-a) log(b-x) (mu = 1, nu = 1) The singular points (a,b) do not have to be specified until the integral is computed, where they are the endpoints of the integration range. The function returns a pointer to the newly allocated `gsl_integration_qaws_table' if no errors were detected, and 0 in the case of error. - Function: int gsl_integration_qaws_table_set (gsl_integration_qaws_table * T, double ALPHA, double BETA, int MU, int NU) This function modifies the parameters (\alpha, \beta, \mu, \nu) of an existing `gsl_integration_qaws_table' struct T. - Function: void gsl_integration_qaws_table_free (gsl_integration_qaws_table * T) This function frees all the memory associated with the `gsl_integration_qaws_table' struct T. - Function: int gsl_integration_qaws (gsl_function * F, const double A, const double B, gsl_integration_qaws_table * T, const double EPSABS, const double EPSREL, const size_t LIMIT, gsl_integration_workspace * WORKSPACE, double * RESULT, double * ABSERR) This function computes the integral of the function f(x) over the interval (a,b) with the singular weight function (x-a)^\alpha (b-x)^\beta \log^\mu (x-a) \log^\nu (b-x). The parameters of the weight function (\alpha, \beta, \mu, \nu) are taken from the table T. The integral is, I = \int_a^b dx f(x) (x-a)^alpha (b-x)^beta log^mu (x-a) log^nu (b-x). The adaptive bisection algorithm of QAG is used. When a subinterval contains one of the endpoints then a special 25-point modified Clenshaw-Curtis rule is used to control the singularities. For subintervals which do not include the endpoints an ordinary 15-point Gauss-Kronrod integration rule is used.  File: gsl-ref.info, Node: QAWO adaptive integration for oscillatory functions, Next: QAWF adaptive integration for Fourier integrals, Prev: QAWS adaptive integration for singular functions, Up: Numerical Integration QAWO adaptive integration for oscillatory functions =================================================== The QAWO algorithm is designed for integrands with an oscillatory factor, \sin(\omega x) or \cos(\omega x). In order to work efficiently the algorithm requires a table of Chebyshev moments which must be pre-computed with calls to the functions below. - Function: gsl_integration_qawo_table * gsl_integration_qawo_table_alloc (double OMEGA, double L, enum gsl_integration_qawo_enum SINE, size_t N) This function allocates space for a `gsl_integration_qawo_table' struct and its associated workspace describing a sine or cosine weight function W(x) with the parameters (\omega, L), W(x) = sin(omega x) W(x) = cos(omega x) The parameter L must be the length of the interval over which the function will be integrated L = b - a. 
The choice of sine or cosine is made with the parameter SINE which should be chosen from one of the two following symbolic values: GSL_INTEG_COSINE GSL_INTEG_SINE The `gsl_integration_qawo_table' is a table of the trigonometric coefficients required in the integration process. The parameter N determines the number of levels of coefficients that are computed. Each level corresponds to one bisection of the interval L, so that N levels are sufficient for subintervals down to the length L/2^n. The integration routine `gsl_integration_qawo' returns the error `GSL_ETABLE' if the number of levels is insufficient for the requested accuracy. - Function: int gsl_integration_qawo_table_set (gsl_integration_qawo_table * T, double OMEGA, double L, enum gsl_integration_qawo_enum SINE) This function changes the parameters OMEGA, L and SINE of the existing workspace T. - Function: int gsl_integration_qawo_table_set_length (gsl_integration_qawo_table * T, double L) This function allows the length parameter L of the workspace T to be changed. - Function: void gsl_integration_qawo_table_free (gsl_integration_qawo_table * T) This function frees all the memory associated with the workspace T. - Function: int gsl_integration_qawo (gsl_function * F, const double A, const double EPSABS, const double EPSREL, const size_t LIMIT, gsl_integration_workspace * WORKSPACE, gsl_integration_qawo_table * WF, double * RESULT, double * ABSERR) This function uses an adaptive algorithm to compute the integral of f over (a,b) with the weight function \sin(\omega x) or \cos(\omega x) defined by the table WF, I = \int_a^b dx f(x) sin(omega x) I = \int_a^b dx f(x) cos(omega x) The results are extrapolated using the epsilon-algorithm to accelerate the convergence of the integral. The function returns the final approximation from the extrapolation, RESULT, and an estimate of the absolute error, ABSERR. The subintervals and their results are stored in the memory provided by WORKSPACE. The maximum number of subintervals is given by LIMIT, which may not exceed the allocated size of the workspace. Those subintervals with "large" widths d where d\omega > 4 are computed using a 25-point Clenshaw-Curtis integration rule, which handles the oscillatory behavior. Subintervals with a "small" widths where d\omega < 4 are computed using a 15-point Gauss-Kronrod integration.  File: gsl-ref.info, Node: QAWF adaptive integration for Fourier integrals, Next: Numerical integration error codes, Prev: QAWO adaptive integration for oscillatory functions, Up: Numerical Integration QAWF adaptive integration for Fourier integrals =============================================== - Function: int gsl_integration_qawf (gsl_function * F, const double A, const double EPSABS, const size_t LIMIT, gsl_integration_workspace * WORKSPACE, gsl_integration_workspace * CYCLE_WORKSPACE, gsl_integration_qawo_table * WF, double * RESULT, double * ABSERR) This function attempts to compute a Fourier integral of the function F over the semi-infinite interval [a,+\infty). I = \int_a^{+\infty} dx f(x) sin(omega x) I = \int_a^{+\infty} dx f(x) cos(omega x) The parameter \omega and choice of \sin or \cos is taken from the table WF (the length L can take any value, since it is overridden by this function to a value appropriate for the fourier integration). The integral is computed using the QAWO algorithm over each of the subintervals, C_1 = [a, a + c] C_2 = [a + c, a + 2 c] ... = ... C_k = [a + (k-1) c, a + k c] where c = (2 floor(|\omega|) + 1) \pi/|\omega|. 
The width c is chosen to cover an odd number of periods so that the contributions from the intervals alternate in sign and are monotonically decreasing when F is positive and monotonically decreasing. The sum of this sequence of contributions is accelerated using the epsilon-algorithm. This function works to an overall absolute tolerance of ABSERR. The following strategy is used: on each interval C_k the algorithm tries to achieve the tolerance TOL_k = u_k abserr where u_k = (1 - p)p^{k-1} and p = 9/10. The sum of the geometric series of contributions from each interval gives an overall tolerance of ABSERR. If the integration of a subinterval leads to difficulties then the accuracy requirement for subsequent intervals is relaxed, TOL_k = u_k max(abserr, max_{i<k}{E_i}) where E_i is the estimated error on the interval C_i. The subintervals and their results are stored in the memory provided by WORKSPACE. The maximum number of subintervals is given by LIMIT, which may not exceed the allocated size of the workspace. The integration over each subinterval uses the memory provided by CYCLE_WORKSPACE as workspace for the QAWO algorithm.

 File: gsl-ref.info, Node: Numerical integration error codes, Next: Numerical integration examples, Prev: QAWF adaptive integration for Fourier integrals, Up: Numerical Integration Error codes =========== In addition to the standard error codes for invalid arguments the functions in this chapter return the following error codes, GSL_EMAXITER the maximum number of subdivisions was exceeded. GSL_EROUND cannot reach tolerance because of roundoff error, or roundoff error was detected in the extrapolation table. GSL_ESING a non-integrable singularity or other bad integrand behavior was found in the integration interval. GSL_EDIVERGE the integral is divergent, or too slowly convergent to be integrated numerically.

 File: gsl-ref.info, Node: Numerical integration examples, Next: Numerical integration References and Further Reading, Prev: Numerical integration error codes, Up: Numerical Integration Examples ======== The integrator QAGS will handle a large class of definite integrals. For example, consider the following integral, which has an algebraic-logarithmic singularity at the origin, \int_0^1 x^{-1/2} log(x) dx = -4 The program below computes this integral to a relative accuracy bound of 1e-7.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_integration.h>

     double
     f (double x, void * params)
     {
       double alpha = *(double *) params;
       double f = log(alpha*x) / sqrt(x);
       return f;
     }

     int
     main (void)
     {
       gsl_integration_workspace * w
         = gsl_integration_workspace_alloc (1000);

       double result, error;
       double expected = -4.0;
       double alpha = 1.0;

       gsl_function F;
       F.function = &f;
       F.params = &alpha;

       gsl_integration_qags (&F, 0, 1, 0, 1e-7, 1000,
                             w, &result, &error);

       printf ("result = % .18f\n", result);
       printf ("exact result = % .18f\n", expected);
       printf ("estimated error = % .18f\n", error);
       printf ("actual error = % .18f\n", result - expected);
       printf ("intervals = %d\n", (int) w->size);

       gsl_integration_workspace_free (w);

       return 0;
     }

The results below show that the desired accuracy is achieved after 8 subdivisions.

     $ ./a.out
     result = -3.999999999999973799
     exact result = -4.000000000000000000
     estimated error = 0.000000000000246025
     actual error = 0.000000000000026201
     intervals = 8

In fact, the extrapolation procedure used by `QAGS' produces an accuracy of almost twice as many digits. The error estimate returned by the extrapolation procedure is larger than the actual error, giving a margin of safety of one order of magnitude.

 File: gsl-ref.info, Node: Numerical integration References and Further Reading, Prev: Numerical integration examples, Up: Numerical Integration References and Further Reading ============================== The following book is the definitive reference for QUADPACK, and was written by the original authors. It provides descriptions of the algorithms, program listings, test programs and examples. It also includes useful advice on numerical integration and many references to the numerical integration literature used in developing QUADPACK. R. Piessens, E. de Doncker-Kapenga, C.W. Uberhuber, D.K. Kahaner. `QUADPACK A subroutine package for automatic integration' Springer Verlag, 1983.

 File: gsl-ref.info, Node: Random Number Generation, Next: Quasi-Random Sequences, Prev: Numerical Integration, Up: Top Random Number Generation ************************ The library provides a large collection of random number generators which can be accessed through a uniform interface. Environment variables allow you to select different generators and seeds at runtime, so that you can easily switch between generators without needing to recompile your program. Each instance of a generator keeps track of its own state, allowing the generators to be used in multi-threaded programs. Additional functions are available for transforming uniform random numbers into samples from continuous or discrete probability distributions such as the Gaussian, log-normal or Poisson distributions. These functions are declared in the header file `gsl_rng.h'.
* Menu: * General comments on random numbers:: * The Random Number Generator Interface:: * Random number generator initialization:: * Sampling from a random number generator:: * Auxiliary random number generator functions:: * Random number environment variables:: * Copying random number generator state:: * Reading and writing random number generator state:: * Random number generator algorithms:: * Unix random number generators:: * Other random number generators:: * Random Number Generator Performance:: * Random Number Generator Examples:: * Random Number References and Further Reading:: * Random Number Acknowledgements::  File: gsl-ref.info, Node: General comments on random numbers, Next: The Random Number Generator Interface, Up: Random Number Generation General comments on random numbers ================================== In 1988, Park and Miller wrote a paper entitled "Random number generators: good ones are hard to find." [Commun. ACM, 31, 1192-1201]. Fortunately, some excellent random number generators are available, though poor ones are still in common use. You may be happy with the system-supplied random number generator on your computer, but you should be aware that as computers get faster, requirements on random number generators increase. Nowadays, a simulation that calls a random number generator millions of times can often finish before you can make it down the hall to the coffee machine and back. A very nice review of random number generators was written by Pierre L'Ecuyer, as Chapter 4 of the book: Handbook on Simulation, Jerry Banks, ed. (Wiley, 1997). The chapter is available in postscript from L'Ecuyer's ftp site (see references). Knuth's volume on Seminumerical Algorithms (originally published in 1968) devotes 170 pages to random number generators, and has recently been updated in its 3rd edition (1997). It is brilliant, a classic. If you don't own it, you should stop reading right now, run to the nearest bookstore, and buy it. A good random number generator will satisfy both theoretical and statistical properties. Theoretical properties are often hard to obtain (they require real math!), but one prefers a random number generator with a long period, low serial correlation, and a tendency _not_ to "fall mainly on the planes." Statistical tests are performed with numerical simulations. Generally, a random number generator is used to estimate some quantity for which the theory of probability provides an exact answer. Comparison to this exact answer provides a measure of "randomness".  File: gsl-ref.info, Node: The Random Number Generator Interface, Next: Random number generator initialization, Prev: General comments on random numbers, Up: Random Number Generation The Random Number Generator Interface ===================================== It is important to remember that a random number generator is not a "real" function like sine or cosine. Unlike real functions, successive calls to a random number generator yield different return values. Of course that is just what you want for a random number generator, but to achieve this effect, the generator must keep track of some kind of "state" variable. Sometimes this state is just an integer (sometimes just the value of the previously generated random number), but often it is more complicated than that and may involve a whole array of numbers, possibly with some indices thrown in. 
To use the random number generators, you do not need to know the details of what comprises the state, and in any case the details vary from algorithm to algorithm. The random number generator library uses two special structs, `gsl_rng_type' which holds static information about each type of generator and `gsl_rng' which describes an instance of a generator created from a given `gsl_rng_type'. The functions described in this section are declared in the header file `gsl_rng.h'.  File: gsl-ref.info, Node: Random number generator initialization, Next: Sampling from a random number generator, Prev: The Random Number Generator Interface, Up: Random Number Generation Random number generator initialization ====================================== - Function: gsl_rng * gsl_rng_alloc (const gsl_rng_type * T) This function returns a pointer to a newly-created instance of a random number generator of type T. For example, the following code creates an instance of the Tausworthe generator, gsl_rng * r = gsl_rng_alloc (gsl_rng_taus); If there is insufficient memory to create the generator then the function returns a null pointer and the error handler is invoked with an error code of `GSL_ENOMEM'. The generator is automatically initialized with the default seed, `gsl_rng_default_seed'. This is zero by default but can be changed either directly or by using the environment variable `GSL_RNG_SEED' (*note Random number environment variables::). The details of the available generator types are described later in this chapter. - Function: void gsl_rng_set (const gsl_rng * R, unsigned long int S) This function initializes (or `seeds') the random number generator. If the generator is seeded with the same value of S on two different runs, the same stream of random numbers will be generated by successive calls to the routines below. If different values of S are supplied, then the generated streams of random numbers should be completely different. If the seed S is zero then the standard seed from the original implementation is used instead. For example, the original Fortran source code for the `ranlux' generator used a seed of 314159265, and so choosing S equal to zero reproduces this when using `gsl_rng_ranlux'. - Function: void gsl_rng_free (gsl_rng * R) This function frees all the memory associated with the generator R.  File: gsl-ref.info, Node: Sampling from a random number generator, Next: Auxiliary random number generator functions, Prev: Random number generator initialization, Up: Random Number Generation Sampling from a random number generator ======================================= The following functions return uniformly distributed random numbers, either as integers or double precision floating point numbers. To obtain non-uniform distributions *note Random Number Distributions::. - Function: unsigned long int gsl_rng_get (const gsl_rng * R) This function returns a random integer from the generator R. The minimum and maximum values depend on the algorithm used, but all integers in the range [MIN,MAX] are equally likely. The values of MIN and MAX can be determined using the auxiliary functions `gsl_rng_max (r)' and `gsl_rng_min (r)'. - Function: double gsl_rng_uniform (const gsl_rng * R) This function returns a double precision floating point number uniformly distributed in the range [0,1). The range includes 0.0 but excludes 1.0. The value is typically obtained by dividing the result of `gsl_rng_get(r)' by `gsl_rng_max(r) + 1.0' in double precision.
Some generators compute this ratio internally so that they can provide floating point numbers with more than 32 bits of randomness (the maximum number of bits that can be portably represented in a single `unsigned long int'). - Function: double gsl_rng_uniform_pos (const gsl_rng * R) This function returns a positive double precision floating point number uniformly distributed in the range (0,1), excluding both 0.0 and 1.0. The number is obtained by sampling the generator with the algorithm of `gsl_rng_uniform' until a non-zero value is obtained. You can use this function if you need to avoid a singularity at 0.0. - Function: unsigned long int gsl_rng_uniform_int (const gsl_rng * R, unsigned long int N) This function returns a random integer from 0 to n-1 inclusive by scaling down and/or discarding samples from the generator R. All integers in the range [0,n-1] are produced with equal probability. For generators with a non-zero minimum value an offset is applied so that zero is returned with the correct probability. Note that this function is designed for sampling from ranges smaller than the range of the underlying generator. The parameter N must be less than or equal to the range of the generator R. If N is larger than the range of the generator then the function calls the error handler with an error code of `GSL_EINVAL' and returns zero. In particular, this function is not intended for generating the full range of unsigned integer values [0,2^32-1]. Instead choose a generator with the maximal integer range and zero minimum value, such as `gsl_rng_ranlxd1', `gsl_rng_mt19937' or `gsl_rng_taus', and sample it directly using `gsl_rng_get()'. The range of each generator can be found using the auxiliary functions described in the next section.  File: gsl-ref.info, Node: Auxiliary random number generator functions, Next: Random number environment variables, Prev: Sampling from a random number generator, Up: Random Number Generation Auxiliary random number generator functions =========================================== The following functions provide information about an existing generator. You should use them in preference to hard-coding the generator parameters into your own code. - Function: const char * gsl_rng_name (const gsl_rng * R) This function returns a pointer to the name of the generator. For example, printf ("r is a '%s' generator\n", gsl_rng_name (r)); would print something like `r is a 'taus' generator'. - Function: unsigned long int gsl_rng_max (const gsl_rng * R) `gsl_rng_max' returns the largest value that `gsl_rng_get' can return. - Function: unsigned long int gsl_rng_min (const gsl_rng * R) `gsl_rng_min' returns the smallest value that `gsl_rng_get' can return. Usually this value is zero. There are some generators with algorithms that cannot return zero, and for these generators the minimum value is 1. - Function: void * gsl_rng_state (const gsl_rng * R) - Function: size_t gsl_rng_size (const gsl_rng * R) These functions return a pointer to the state of generator R and its size. You can use this information to access the state directly. For example, the following code will write the state of a generator to a stream, void * state = gsl_rng_state (r); size_t n = gsl_rng_size (r); fwrite (state, n, 1, stream); - Function: const gsl_rng_type ** gsl_rng_types_setup (void) This function returns a pointer to an array of all the available generator types, terminated by a null pointer. The function should be called once at the start of the program, if needed.
The following code fragment shows how to iterate over the array of generator types to print the names of the available algorithms,

     const gsl_rng_type **t, **t0;

     t0 = gsl_rng_types_setup ();

     printf ("Available generators:\n");

     for (t = t0; *t != 0; t++)
       {
         printf ("%s\n", (*t)->name);
       }

 File: gsl-ref.info, Node: Random number environment variables, Next: Copying random number generator state, Prev: Auxiliary random number generator functions, Up: Random Number Generation Random number environment variables =================================== The library allows you to choose a default generator and seed from the environment variables `GSL_RNG_TYPE' and `GSL_RNG_SEED' and the function `gsl_rng_env_setup'. This makes it easy to try out different generators and seeds without having to recompile your program. - Function: const gsl_rng_type * gsl_rng_env_setup (void) This function reads the environment variables `GSL_RNG_TYPE' and `GSL_RNG_SEED' and uses their values to set the corresponding library variables `gsl_rng_default' and `gsl_rng_default_seed'. These global variables are defined as follows,

     extern const gsl_rng_type *gsl_rng_default
     extern unsigned long int gsl_rng_default_seed

The environment variable `GSL_RNG_TYPE' should be the name of a generator, such as `taus' or `mt19937'. The environment variable `GSL_RNG_SEED' should contain the desired seed value. It is converted to an `unsigned long int' using the C library function `strtoul'. If you don't specify a generator for `GSL_RNG_TYPE' then `gsl_rng_mt19937' is used as the default. The initial value of `gsl_rng_default_seed' is zero. Here is a short program which shows how to create a global generator using the environment variables `GSL_RNG_TYPE' and `GSL_RNG_SEED',

     #include <stdio.h>
     #include <gsl/gsl_rng.h>

     gsl_rng * r;  /* global generator */

     int
     main (void)
     {
       const gsl_rng_type * T;

       gsl_rng_env_setup();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       printf ("generator type: %s\n", gsl_rng_name (r));
       printf ("seed = %lu\n", gsl_rng_default_seed);
       printf ("first value = %lu\n", gsl_rng_get (r));

       return 0;
     }

Running the program without any environment variables uses the initial defaults, an `mt19937' generator with a seed of 0,

     $ ./a.out
     generator type: mt19937
     seed = 0
     first value = 4293858116

By setting the two variables on the command line we can change the default generator and the seed,

     $ GSL_RNG_TYPE="taus" GSL_RNG_SEED=123 ./a.out
     GSL_RNG_TYPE=taus
     GSL_RNG_SEED=123
     generator type: taus
     seed = 123
     first value = 2720986350

 File: gsl-ref.info, Node: Copying random number generator state, Next: Reading and writing random number generator state, Prev: Random number environment variables, Up: Random Number Generation Copying random number generator state ===================================== The above methods do not expose the random number `state' which changes from call to call. It is often useful to be able to save and restore the state. To permit these practices, a few somewhat more advanced functions are supplied. These include: - Function: int gsl_rng_memcpy (gsl_rng * DEST, const gsl_rng * SRC) This function copies the random number generator SRC into the pre-existing generator DEST, making DEST into an exact copy of SRC. The two generators must be of the same type. - Function: gsl_rng * gsl_rng_clone (const gsl_rng * R) This function returns a pointer to a newly created generator which is an exact copy of the generator R.
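As a brief illustration of these functions, the following fragment is a minimal sketch (not taken from the library sources; the function name is arbitrary) which snapshots the state of a generator with `gsl_rng_clone' and later rolls the generator back with `gsl_rng_memcpy',

     #include <gsl/gsl_rng.h>

     /* Sketch: snapshot the state of r, advance r, then restore the
        snapshot.  Returns 1 if the restored generator reproduces the
        same value, which is the expected behaviour. */
     int
     check_snapshot_restore (gsl_rng * r)
     {
       gsl_rng * saved = gsl_rng_clone (r);    /* copy of r's type and state */

       unsigned long int a = gsl_rng_get (r);  /* advances r */

       gsl_rng_memcpy (r, saved);              /* roll r back to the snapshot */
       unsigned long int b = gsl_rng_get (r);  /* equal to a */

       gsl_rng_free (saved);
       return a == b;
     }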
File: gsl-ref.info, Node: Reading and writing random number generator state, Next: Random number generator algorithms, Prev: Copying random number generator state, Up: Random Number Generation Reading and writing random number generator state ================================================= The library provides functions for reading and writing the random number state to a file as binary data or formatted text. - Function: int gsl_rng_fwrite (FILE * STREAM, const gsl_rng * R) This function writes the random number state of the random number generator R to the stream STREAM in binary format. The return value is 0 for success and `GSL_EFAILED' if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures. - Function: int gsl_rng_fread (FILE * STREAM, gsl_rng * R) This function reads the random number state into the random number generator R from the open stream STREAM in binary format. The random number generator R must be preinitialized with the correct random number generator type since type information is not saved. The return value is 0 for success and `GSL_EFAILED' if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture.  File: gsl-ref.info, Node: Random number generator algorithms, Next: Unix random number generators, Prev: Reading and writing random number generator state, Up: Random Number Generation Random number generator algorithms ================================== The functions described above make no reference to the actual algorithm used. This is deliberate so that you can switch algorithms without having to change any of your application source code. The library provides a large number of generators of different types, including simulation quality generators, generators provided for compatibility with other libraries and historical generators from the past. The following generators are recommended for use in simulation. They have extremely long periods, low correlation and pass most statistical tests. - Generator: gsl_rng_mt19937 The MT19937 generator of Makoto Matsumoto and Takuji Nishimura is a variant of the twisted generalized feedback shift-register algorithm, and is known as the "Mersenne Twister" generator. It has a Mersenne prime period of 2^19937 - 1 (about 10^6000) and is equi-distributed in 623 dimensions. It has passed the DIEHARD statistical tests. It uses 624 words of state per generator and is comparable in speed to the other generators. The original generator used a default seed of 4357 and choosing S equal to zero in `gsl_rng_set' reproduces this. For more information see, Makoto Matsumoto and Takuji Nishimura, "Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator". `ACM Transactions on Modeling and Computer Simulation', Vol. 8, No. 1 (Jan. 1998), Pages 3-30 The generator `gsl_rng_mt19937' uses the second revision of the seeding procedure published by the two authors above in 2002. The original seeding procedures could cause spurious artifacts for some seed values. They are still available through the alternative generators `gsl_rng_mt19937_1999' and `gsl_rng_mt19937_1998'. - Generator: gsl_rng_ranlxs0 - Generator: gsl_rng_ranlxs1 - Generator: gsl_rng_ranlxs2 The generator `ranlxs0' is a second-generation version of the RANLUX algorithm of Lu"scher, which produces "luxury random numbers". 
This generator provides single precision output (24 bits) at three luxury levels `ranlxs0', `ranlxs1' and `ranlxs2'. It uses double-precision floating point arithmetic internally and can be significantly faster than the integer version of `ranlux', particularly on 64-bit architectures. The period of the generator is about 10^171. The algorithm has mathematically proven properties and can provide truly decorrelated numbers at a known level of randomness. The higher luxury levels provide increased decorrelation between samples as an additional safety margin. - Generator: gsl_rng_ranlxd1 - Generator: gsl_rng_ranlxd2 These generators produce double precision output (48 bits) from the RANLXS generator. The library provides two luxury levels `ranlxd1' and `ranlxd2'. - Generator: gsl_rng_ranlux - Generator: gsl_rng_ranlux389 The `ranlux' generator is an implementation of the original algorithm developed by Lu"scher. It uses a lagged-fibonacci-with-skipping algorithm to produce "luxury random numbers". It is a 24-bit generator, originally designed for single-precision IEEE floating point numbers. This implementation is based on integer arithmetic, while the second-generation versions RANLXS and RANLXD described above provide floating-point implementations which will be faster on many platforms. The period of the generator is about 10^171. The algorithm has mathematically proven properties and it can provide truly decorrelated numbers at a known level of randomness. The default level of decorrelation recommended by Lu"scher is provided by `gsl_rng_ranlux', while `gsl_rng_ranlux389' gives the highest level of randomness, with all 24 bits decorrelated. Both types of generator use 24 words of state per generator. For more information see, M. Lu"scher, "A portable high-quality random number generator for lattice field theory calculations", `Computer Physics Communications', 79 (1994) 100-110. F. James, "RANLUX: A Fortran implementation of the high-quality pseudo-random number generator of Lu"scher", `Computer Physics Communications', 79 (1994) 111-114 - Generator: gsl_rng_cmrg This is a combined multiple recursive generator by L'Ecuyer. Its sequence is, z_n = (x_n - y_n) mod m_1 where the two underlying generators x_n and y_n are, x_n = (a_1 x_{n-1} + a_2 x_{n-2} + a_3 x_{n-3}) mod m_1 y_n = (b_1 y_{n-1} + b_2 y_{n-2} + b_3 y_{n-3}) mod m_2 with coefficients a_1 = 0, a_2 = 63308, a_3 = -183326, b_1 = 86098, b_2 = 0, b_3 = -539608, and moduli m_1 = 2^31 - 1 = 2147483647 and m_2 = 2145483479. The period of this generator is lcm(m_1^3-1, m_2^3-1), which is approximately 2^185 (about 10^56). It uses 6 words of state per generator. For more information see, P. L'Ecuyer, "Combined Multiple Recursive Random Number Generators", `Operations Research', 44, 5 (1996), 816-822. - Generator: gsl_rng_mrg This is a fifth-order multiple recursive generator by L'Ecuyer, Blouin and Coutre. Its sequence is, x_n = (a_1 x_{n-1} + a_5 x_{n-5}) mod m with a_1 = 107374182, a_2 = a_3 = a_4 = 0, a_5 = 104480 and m = 2^31 - 1. The period of this generator is about 10^46. It uses 5 words of state per generator. More information can be found in the following paper, P. L'Ecuyer, F. Blouin, and R. Coutre, "A search for good multiple recursive random number generators", `ACM Transactions on Modeling and Computer Simulation' 3, 87-98 (1993). - Generator: gsl_rng_taus - Generator: gsl_rng_taus2 This is a maximally equidistributed combined Tausworthe generator by L'Ecuyer. 
The sequence is, x_n = (s1_n ^^ s2_n ^^ s3_n) where, s1_{n+1} = (((s1_n&4294967294)<<12)^^(((s1_n<<13)^^s1_n)>>19)) s2_{n+1} = (((s2_n&4294967288)<< 4)^^(((s2_n<< 2)^^s2_n)>>25)) s3_{n+1} = (((s3_n&4294967280)<<17)^^(((s3_n<< 3)^^s3_n)>>11)) computed modulo 2^32. In the formulas above ^^ denotes "exclusive-or". Note that the algorithm relies on the properties of 32-bit unsigned integers and has been implemented using a bitmask of `0xFFFFFFFF' to make it work on 64 bit machines. The period of this generator is 2^88 (about 10^26). It uses 3 words of state per generator. For more information see, P. L'Ecuyer, "Maximally Equidistributed Combined Tausworthe Generators", `Mathematics of Computation', 65, 213 (1996), 203-213. The generator `gsl_rng_taus2' uses the same algorithm as `gsl_rng_taus' but with an improved seeding procedure described in the paper, P. L'Ecuyer, "Tables of Maximally Equidistributed Combined LFSR Generators", `Mathematics of Computation', 68, 225 (1999), 261-269 The generator `gsl_rng_taus2' should now be used in preference to `gsl_rng_taus'. - Generator: gsl_rng_gfsr4 The `gfsr4' generator is like a lagged-fibonacci generator, and produces each number as an `xor''d sum of four previous values. r_n = r_{n-A} ^^ r_{n-B} ^^ r_{n-C} ^^ r_{n-D} Ziff (ref below) notes that "it is now widely known" that two-tap registers (such as R250, which is described below) have serious flaws, the most obvious one being the three-point correlation that comes from the definition of the generator. Nice mathematical properties can be derived for GFSR's, and numerics bears out the claim that 4-tap GFSR's with appropriately chosen offsets are as random as can be measured, using the author's test. This implementation uses the values suggested the example on p392 of Ziff's article: A=471, B=1586, C=6988, D=9689. If the offsets are appropriately chosen (such as the one ones in this implementation), then the sequence is said to be maximal; that means that the period is 2^D - 1, where D is the longest lag. (It is one less than 2^D because it is not permitted to have all zeros in the `ra[]' array.) For this implementation with D=9689 that works out to about 10^2917. Note that the implementation of this generator using a 32-bit integer amounts to 32 parallel implementations of one-bit generators. One consequence of this is that the period of this 32-bit generator is the same as for the one-bit generator. Moreover, this independence means that all 32-bit patterns are equally likely, and in particular that 0 is an allowed random value. (We are grateful to Heiko Bauke for clarifying for us these properties of GFSR random number generators.) For more information see, Robert M. Ziff, "Four-tap shift-register-sequence random-number generators", `Computers in Physics', 12(4), Jul/Aug 1998, pp 385-392.  File: gsl-ref.info, Node: Unix random number generators, Next: Other random number generators, Prev: Random number generator algorithms, Up: Random Number Generation Unix random number generators ============================= The standard Unix random number generators `rand', `random' and `rand48' are provided as part of GSL. Although these generators are widely available individually often they aren't all available on the same platform. This makes it difficult to write portable code using them and so we have included the complete set of Unix generators in GSL for convenience. Note that these generators don't produce high-quality randomness and aren't suitable for work requiring accurate statistics. 
However, if you won't be measuring statistical quantities and just want to introduce some variation into your program then these generators are quite acceptable. - Generator: gsl_rng_rand This is the BSD `rand()' generator. Its sequence is x_{n+1} = (a x_n + c) mod m with a = 1103515245, c = 12345 and m = 2^31. The seed specifies the initial value, x_1. The period of this generator is 2^31, and it uses 1 word of storage per generator. - Generator: gsl_rng_random_bsd - Generator: gsl_rng_random_libc5 - Generator: gsl_rng_random_glibc2 These generators implement the `random()' family of functions, a set of linear feedback shift register generators originally used in BSD Unix. There are several versions of `random()' in use today: the original BSD version (e.g. on SunOS4), a libc5 version (found on older GNU/Linux systems) and a glibc2 version. Each version uses a different seeding procedure, and thus produces different sequences. The original BSD routines accepted a variable length buffer for the generator state, with longer buffers providing higher-quality randomness. The `random()' function implemented algorithms for buffer lengths of 8, 32, 64, 128 and 256 bytes, and the algorithm with the largest length that would fit into the user-supplied buffer was used. To support these algorithms additional generators are available with the following names, gsl_rng_random8_bsd gsl_rng_random32_bsd gsl_rng_random64_bsd gsl_rng_random128_bsd gsl_rng_random256_bsd where the numeric suffix indicates the buffer length. The original BSD `random' function used a 128-byte default buffer and so `gsl_rng_random_bsd' has been made equivalent to `gsl_rng_random128_bsd'. Corresponding versions of the `libc5' and `glibc2' generators are also available, with the names `gsl_rng_random8_libc5', `gsl_rng_random8_glibc2', etc. - Generator: gsl_rng_rand48 This is the Unix `rand48' generator. Its sequence is x_{n+1} = (a x_n + c) mod m defined on 48-bit unsigned integers with a = 25214903917, c = 11 and m = 2^48. The seed specifies the upper 32 bits of the initial value, x_1, with the lower 16 bits set to `0x330E'. The function `gsl_rng_get' returns the upper 32 bits from each term of the sequence. This does not have a direct parallel in the original `rand48' functions, but forcing the result to type `long int' reproduces the output of `mrand48'. The function `gsl_rng_uniform' uses the full 48 bits of internal state to return the double precision number x_n/m, which is equivalent to the function `drand48'. Note that some versions of the GNU C Library contained a bug in `mrand48' function which caused it to produce different results (only the lower 16-bits of the return value were set).  File: gsl-ref.info, Node: Other random number generators, Next: Random Number Generator Performance, Prev: Unix random number generators, Up: Random Number Generation Other random number generators ============================== The generators in this section are provided for compatibility with existing libraries. If you are converting an existing program to use GSL then you can select these generators to check your new implementation against the original one, using the same random number generator. After verifying that your new program reproduces the original results you can then switch to a higher-quality generator. Note that most of the generators in this section are based on single linear congruence relations, which are the least sophisticated type of generator. 
In particular, linear congruences have poor properties when used with a non-prime modulus, as several of these routines do (e.g. with a power of two modulus, 2^31 or 2^32). This leads to periodicity in the least significant bits of each number, with only the higher bits having any randomness. Thus if you want to produce a random bitstream it is best to avoid using the least significant bits. - Generator: gsl_rng_ranf This is the CRAY random number generator `RANF'. Its sequence is x_{n+1} = (a x_n) mod m defined on 48-bit unsigned integers with a = 44485709377909 and m = 2^48. The seed specifies the lower 32 bits of the initial value, x_1, with the lowest bit set to prevent the seed taking an even value. The upper 16 bits of x_1 are set to 0. A consequence of this procedure is that the pairs of seeds 2 and 3, 4 and 5, etc produce the same sequences. The generator compatible with the CRAY MATHLIB routine RANF. It produces double precision floating point numbers which should be identical to those from the original RANF. There is a subtlety in the implementation of the seeding. The initial state is reversed through one step, by multiplying by the modular inverse of a mod m. This is done for compatibility with the original CRAY implementation. Note that you can only seed the generator with integers up to 2^32, while the original CRAY implementation uses non-portable wide integers which can cover all 2^48 states of the generator. The function `gsl_rng_get' returns the upper 32 bits from each term of the sequence. The function `gsl_rng_uniform' uses the full 48 bits to return the double precision number x_n/m. The period of this generator is 2^46. - Generator: gsl_rng_ranmar This is the RANMAR lagged-fibonacci generator of Marsaglia, Zaman and Tsang. It is a 24-bit generator, originally designed for single-precision IEEE floating point numbers. It was included in the CERNLIB high-energy physics library. - Generator: gsl_rng_r250 This is the shift-register generator of Kirkpatrick and Stoll. The sequence is based on the recurrence x_n = x_{n-103} ^^ x_{n-250} where ^^ denotes "exclusive-or", defined on 32-bit words. The period of this generator is about 2^250 and it uses 250 words of state per generator. For more information see, S. Kirkpatrick and E. Stoll, "A very fast shift-register sequence random number generator", `Journal of Computational Physics', 40, 517-526 (1981) - Generator: gsl_rng_tt800 This is an earlier version of the twisted generalized feedback shift-register generator, and has been superseded by the development of MT19937. However, it is still an acceptable generator in its own right. It has a period of 2^800 and uses 33 words of storage per generator. For more information see, Makoto Matsumoto and Yoshiharu Kurita, "Twisted GFSR Generators II", `ACM Transactions on Modelling and Computer Simulation', Vol. 4, No. 3, 1994, pages 254-266. - Generator: gsl_rng_vax This is the VAX generator `MTH$RANDOM'. Its sequence is, x_{n+1} = (a x_n + c) mod m with a = 69069, c = 1 and m = 2^32. The seed specifies the initial value, x_1. The period of this generator is 2^32 and it uses 1 word of storage per generator. - Generator: gsl_rng_transputer This is the random number generator from the INMOS Transputer Development system. Its sequence is, x_{n+1} = (a x_n) mod m with a = 1664525 and m = 2^32. The seed specifies the initial value, x_1. - Generator: gsl_rng_randu This is the IBM `RANDU' generator. Its sequence is x_{n+1} = (a x_n) mod m with a = 65539 and m = 2^31. 
The seed specifies the initial value, x_1. The period of this generator was only 2^29. It has become a textbook example of a poor generator. - Generator: gsl_rng_minstd This is Park and Miller's "minimal standard" MINSTD generator, a simple linear congruence which takes care to avoid the major pitfalls of such algorithms. Its sequence is, x_{n+1} = (a x_n) mod m with a = 16807 and m = 2^31 - 1 = 2147483647. The seed specifies the initial value, x_1. The period of this generator is about 2^31. This generator is used in the IMSL Library (subroutine RNUN) and in MATLAB (the RAND function). It is also sometimes known by the acronym "GGL" (I'm not sure what that stands for). For more information see, Park and Miller, "Random Number Generators: Good ones are hard to find", `Communications of the ACM', October 1988, Volume 31, No 10, pages 1192-1201. - Generator: gsl_rng_uni - Generator: gsl_rng_uni32 This is a reimplementation of the 16-bit SLATEC random number generator RUNIF. A generalization of the generator to 32 bits is provided by `gsl_rng_uni32'. The original source code is available from NETLIB. - Generator: gsl_rng_slatec This is the SLATEC random number generator RAND. It is ancient. The original source code is available from NETLIB. - Generator: gsl_rng_zuf This is the ZUFALL lagged Fibonacci series generator of Peterson. Its sequence is, t = u_{n-273} + u_{n-607} u_n = t - floor(t) The original source code is available from NETLIB. For more information see, W. Petersen, "Lagged Fibonacci Random Number Generators for the NEC SX-3", `International Journal of High Speed Computing' (1994). - Generator: gsl_rng_borosh13 This is the Borosh-Niederreiter random number generator. It is taken from Knuth's `Seminumerical Algorithms', 3rd Ed., pages 106-108. Its sequence is, x_{n+1} = (a x_n) mod m with a = 1812433253 and m = 2^32. The seed specifies the initial value, x_1. - Generator: gsl_rng_coveyou This is the Coveyou random number generator. It is taken from Knuth's `Seminumerical Algorithms', 3rd Ed., Section 3.2.2. Its sequence is, x_{n+1} = (x_n (x_n + 1)) mod m with m = 2^32. The seed specifies the initial value, x_1. - Generator: gsl_rng_fishman18 This is the Fishman, Moore III random number generator. It is taken from Knuth's `Seminumerical Algorithms', 3rd Ed., pages 106-108. Its sequence is, x_{n+1} = (a x_n) mod m with a = 62089911 and m = 2^31 - 1. The seed specifies the initial value, x_1. - Generator: gsl_rng_fishman20 This is the Fishman random number generator. It is taken from Knuth's `Seminumerical Algorithms', 3rd Ed., page 108. Its sequence is, x_{n+1} = (a x_n) mod m with a = 48271 and m = 2^31 - 1. The seed specifies the initial value, x_1. - Generator: gsl_rng_fishman2x This is the L'Ecuyer-Fishman random number generator. It is taken from Knuth's `Seminumerical Algorithms', 3rd Ed., page 108. Its sequence is, z_{n+1} = (x_n - y_n) mod m with m = 2^31 - 1. x_n and y_n are given by the `fishman20' and `lecuyer21' algorithms. The seed specifies the initial value, x_1. - Generator: gsl_rng_knuthran2 This is a second-order multiple recursive generator described by Knuth in `Seminumerical Algorithms', 3rd Ed., page 108. Its sequence is, x_n = (a_1 x_{n-1} + a_2 x_{n-2}) mod m with a_1 = 271828183, a_2 = 314159269, and m = 2^31 - 1. - Generator: gsl_rng_knuthran This is a second-order multiple recursive generator described by Knuth in `Seminumerical Algorithms', 3rd Ed., Section 3.6. Knuth provides its C code. 
- Generator: gsl_rng_lecuyer21 This is the L'Ecuyer random number generator. It is taken from Knuth's `Seminumerical Algorithms', 3rd Ed., pages 106-108. Its sequence is, x_{n+1} = (a x_n) mod m with a = 40692 and m = 2^31 - 249. The seed specifies the initial value, x_1. - Generator: gsl_rng_waterman14 This is the Waterman random number generator. It is taken from Knuth's `Seminumerical Algorithms', 3rd Ed., pages 106-108. Its sequence is, x_{n+1} = (a x_n) mod m with a = 1566083941 and m = 2^32. The seed specifies the initial value, x_1.

 File: gsl-ref.info, Node: Random Number Generator Performance, Next: Random Number Generator Examples, Prev: Other random number generators, Up: Random Number Generation Performance =========== The following table shows the relative performance of a selection of the available random number generators. The fastest simulation quality generators are `taus', `gfsr4' and `mt19937'. The generators which offer the best mathematically-proven quality are those based on the RANLUX algorithm.

     1754 k ints/sec,    870 k doubles/sec, taus
     1613 k ints/sec,    855 k doubles/sec, gfsr4
     1370 k ints/sec,    769 k doubles/sec, mt19937
      565 k ints/sec,    571 k doubles/sec, ranlxs0
      400 k ints/sec,    405 k doubles/sec, ranlxs1
      490 k ints/sec,    389 k doubles/sec, mrg
      407 k ints/sec,    297 k doubles/sec, ranlux
      243 k ints/sec,    254 k doubles/sec, ranlxd1
      251 k ints/sec,    253 k doubles/sec, ranlxs2
      238 k ints/sec,    215 k doubles/sec, cmrg
      247 k ints/sec,    198 k doubles/sec, ranlux389
      141 k ints/sec,    140 k doubles/sec, ranlxd2
     1852 k ints/sec,    935 k doubles/sec, ran3
      813 k ints/sec,    575 k doubles/sec, ran0
      787 k ints/sec,    476 k doubles/sec, ran1
      379 k ints/sec,    292 k doubles/sec, ran2

 File: gsl-ref.info, Node: Random Number Generator Examples, Next: Random Number References and Further Reading, Prev: Random Number Generator Performance, Up: Random Number Generation Examples ======== The following program demonstrates the use of a random number generator to produce uniform random numbers in the range [0.0, 1.0),

     #include <stdio.h>
     #include <gsl/gsl_rng.h>

     int
     main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;

       int i, n = 10;

       gsl_rng_env_setup();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       for (i = 0; i < n; i++)
         {
           double u = gsl_rng_uniform (r);
           printf ("%.5f\n", u);
         }

       gsl_rng_free (r);

       return 0;
     }

Here is the output of the program,

     $ ./a.out
     0.99974
     0.16291
     0.28262
     0.94720
     0.23166
     0.48497
     0.95748
     0.74431
     0.54004
     0.73995

The numbers depend on the seed used by the generator. The default seed can be changed with the `GSL_RNG_SEED' environment variable to produce a different stream of numbers. The generator itself can be changed using the environment variable `GSL_RNG_TYPE'. Here is the output of the program using a seed value of 123 and the multiple-recursive generator `mrg',

     $ GSL_RNG_SEED=123 GSL_RNG_TYPE=mrg ./a.out
     GSL_RNG_TYPE=mrg
     GSL_RNG_SEED=123
     0.33050
     0.86631
     0.32982
     0.67620
     0.53391
     0.06457
     0.16847
     0.70229
     0.04371
     0.86374

 File: gsl-ref.info, Node: Random Number References and Further Reading, Next: Random Number Acknowledgements, Prev: Random Number Generator Examples, Up: Random Number Generation References and Further Reading ============================== The subject of random number generation and testing is reviewed extensively in Knuth's `Seminumerical Algorithms'. Donald E. Knuth, `The Art of Computer Programming: Seminumerical Algorithms' (Vol 2, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896842. Further information is available in the review paper written by Pierre L'Ecuyer, P.
L'Ecuyer, "Random Number Generation", Chapter 4 of the Handbook on Simulation, Jerry Banks Ed., Wiley, 1998, 93-137. `http://www.iro.umontreal.ca/~lecuyer/papers.html' in the file `handsim.ps'. The source code for the DIEHARD random number generator tests is also available online, `DIEHARD source code' G. Marsaglia, `http://stat.fsu.edu/pub/diehard/' A comprehensive set of random number generator tests is available from NIST, NIST Special Publication 800-22, "A Statistical Test Suite for the Validation of Random Number Generators and Pseudo Random Number Generators for Cryptographic Applications". `http://csrc.nist.gov/rng/'  File: gsl-ref.info, Node: Random Number Acknowledgements, Prev: Random Number References and Further Reading, Up: Random Number Generation Acknowledgements ================ Thanks to Makoto Matsumoto, Takuji Nishimura and Yoshiharu Kurita for making the source code to their generators (MT19937, MM&TN; TT800, MM&YK) available under the GNU General Public License. Thanks to Martin Lu"scher for providing notes and source code for the RANLXS and RANLXD generators.  File: gsl-ref.info, Node: Quasi-Random Sequences, Next: Random Number Distributions, Prev: Random Number Generation, Up: Top Quasi-Random Sequences ********************** This chapter describes functions for generating quasi-random sequences in arbitrary dimensions. A quasi-random sequence progressively covers a d-dimensional space with a set of points that are uniformly distributed. Quasi-random sequences are also known as low-discrepancy sequences. The quasi-random sequence generators use an interface that is similar to the interface for random number generators, except that seeding is not required--each generator produces a single sequence. The functions described in this section are declared in the header file `gsl_qrng.h'. * Menu: * Quasi-random number generator initialization:: * Sampling from a quasi-random number generator:: * Auxiliary quasi-random number generator functions:: * Saving and resorting quasi-random number generator state:: * Quasi-random number generator algorithms:: * Quasi-random number generator examples:: * Quasi-random number references::  File: gsl-ref.info, Node: Quasi-random number generator initialization, Next: Sampling from a quasi-random number generator, Up: Quasi-Random Sequences Quasi-random number generator initialization ============================================ - Function: gsl_qrng * gsl_qrng_alloc (const gsl_qrng_type * T, unsigned int D) This function returns a pointer to a newly-created instance of a quasi-random sequence generator of type T and dimension D. If there is insufficient memory to create the generator then the function returns a null pointer and the error handler is invoked with an error code of `GSL_ENOMEM'. - Function: void gsl_qrng_free (gsl_qrng * Q) This function frees all the memory associated with the generator Q. - Function: void gsl_qrng_init (gsl_qrng * Q) This function reinitializes the generator Q to its starting point. Note that quasi-random sequences do not use a seed and always produce the same set of values.  
File: gsl-ref.info, Node: Sampling from a quasi-random number generator, Next: Auxiliary quasi-random number generator functions, Prev: Quasi-random number generator initialization, Up: Quasi-Random Sequences Sampling from a quasi-random number generator ============================================= - Function: int gsl_qrng_get (const gsl_qrng * Q, double X[]) This function stores the next point from the sequence generator Q in the array X. The space available for X must match the dimension of the generator. The point X will lie in the range 0 < x_i < 1 for each x_i.  File: gsl-ref.info, Node: Auxiliary quasi-random number generator functions, Next: Saving and resorting quasi-random number generator state, Prev: Sampling from a quasi-random number generator, Up: Quasi-Random Sequences Auxiliary quasi-random number generator functions ================================================= - Function: const char * gsl_qrng_name (const gsl_qrng * Q) This function returns a pointer to the name of the generator. - Function: size_t gsl_qrng_size (const gsl_qrng * Q) - Function: void * gsl_qrng_state (const gsl_qrng * Q) These functions return a pointer to the state of generator Q and its size. You can use this information to access the state directly. For example, the following code will write the state of a generator to a stream, void * state = gsl_qrng_state (q); size_t n = gsl_qrng_size (q); fwrite (state, n, 1, stream);  File: gsl-ref.info, Node: Saving and resorting quasi-random number generator state, Next: Quasi-random number generator algorithms, Prev: Auxiliary quasi-random number generator functions, Up: Quasi-Random Sequences Saving and resorting quasi-random number generator state ======================================================== - Function: int gsl_qrng_memcpy (gsl_qrng * DEST, const gsl_qrng * SRC) This function copies the quasi-random sequence generator SRC into the pre-existing generator DEST, making DEST into an exact copy of SRC. The two generators must be of the same type. - Function: gsl_qrng * gsl_qrng_clone (const gsl_qrng * Q) This function returns a pointer to a newly created generator which is an exact copy of the generator Q.  File: gsl-ref.info, Node: Quasi-random number generator algorithms, Next: Quasi-random number generator examples, Prev: Saving and resorting quasi-random number generator state, Up: Quasi-Random Sequences Quasi-random number generator algorithms ======================================== The following quasi-random sequence algorithms are available, - Generator: gsl_qrng_niederreiter_2 This generator uses the algorithm described in Bratley, Fox, Niederreiter, `ACM Trans. Model. Comp. Sim.' 2, 195 (1992). It is valid up to 12 dimensions. - Generator: gsl_qrng_sobol This generator uses the Sobol sequence described in Antonov, Saleev, `USSR Comput. Maths. Math. Phys.' 19, 252 (1980). It is valid up to 40 dimensions.  File: gsl-ref.info, Node: Quasi-random number generator examples, Next: Quasi-random number references, Prev: Quasi-random number generator algorithms, Up: Quasi-Random Sequences Examples ======== The following program prints the first 1024 points of the 2-dimensional Sobol sequence.
     #include <stdio.h>
     #include <gsl/gsl_qrng.h>

     int
     main (void)
     {
       int i;
       gsl_qrng * q = gsl_qrng_alloc (gsl_qrng_sobol, 2);

       for (i = 0; i < 1024; i++)
         {
           double v[2];
           gsl_qrng_get (q, v);
           printf ("%.5f %.5f\n", v[0], v[1]);
         }

       gsl_qrng_free (q);
       return 0;
     }

Here is the output from the program,

     $ ./a.out
     0.50000 0.50000
     0.75000 0.25000
     0.25000 0.75000
     0.37500 0.37500
     0.87500 0.87500
     0.62500 0.12500
     0.12500 0.62500
     ....

It can be seen that successive points progressively fill in the spaces
between previous points.

 File: gsl-ref.info, Node: Quasi-random number references, Prev: Quasi-random number generator examples, Up: Quasi-Random Sequences

References
==========

The implementations of the quasi-random sequence routines are based on
the algorithms described in the following paper,

     P. Bratley and B.L. Fox and H. Niederreiter, "Algorithm 738:
     Programs to Generate Niederreiter's Low-discrepancy Sequences",
     `ACM Transactions on Mathematical Software', Vol. 20, No. 4,
     December, 1994, p. 494-495.

 File: gsl-ref.info, Node: Random Number Distributions, Next: Statistics, Prev: Quasi-Random Sequences, Up: Top

Random Number Distributions
***************************

This chapter describes functions for generating random variates and
computing their probability distributions.  Samples from the
distributions described in this chapter can be obtained using any of
the random number generators in the library as an underlying source of
randomness.

In the simplest cases a non-uniform distribution can be obtained
analytically from the uniform distribution of a random number
generator by applying an appropriate transformation.  This method uses
one call to the random number generator.  More complicated
distributions are created by the "acceptance-rejection" method, which
compares the desired distribution against a distribution which is
similar and known analytically.  This usually requires several samples
from the generator.

The library also provides cumulative distribution functions and
inverse cumulative distribution functions, sometimes referred to as
quantile functions.  The cumulative distribution functions and their
inverses are computed separately for the upper and lower tails of the
distribution, allowing full accuracy to be retained for small results.

The functions for random variates and probability density functions
described in this section are declared in `gsl_randist.h'.  The
corresponding cumulative distribution functions are declared in
`gsl_cdf.h'.

Note that the discrete random variate functions always return a value
of type `unsigned int', and on most platforms this has a maximum value
of 2^32-1 ~=~ 4.29e9.  They should only be called with a safe range of
parameters (where there is a negligible probability of a variate
exceeding this limit) to prevent incorrect results due to overflow.
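To make the transformation method described above concrete, here is a
minimal sketch (an illustration, not one of the manual's examples)
which converts uniform deviates into exponential variates with mean
\mu using the transformation x = -\mu \log(u).  The library function
`gsl_ran_exponential' described later in this chapter performs the
same task; `gsl_rng_uniform_pos', from the random number generation
chapter, returns a uniform deviate strictly between 0 and 1.  The
value of mu here is arbitrary.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_rng.h>

     int
     main (void)
     {
       int i;
       double mu = 3.0;
       gsl_rng * r;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       for (i = 0; i < 5; i++)
         {
           /* u is uniform on (0,1); -mu*log(u) is exponential with mean mu */
           double u = gsl_rng_uniform_pos (r);
           printf ("%g\n", -mu * log (u));
         }

       gsl_rng_free (r);
       return 0;
     }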
* Menu: * Random Number Distribution Introduction:: * The Gaussian Distribution:: * The Gaussian Tail Distribution:: * The Bivariate Gaussian Distribution:: * The Exponential Distribution:: * The Laplace Distribution:: * The Exponential Power Distribution:: * The Cauchy Distribution:: * The Rayleigh Distribution:: * The Rayleigh Tail Distribution:: * The Landau Distribution:: * The Levy alpha-Stable Distributions:: * The Levy skew alpha-Stable Distribution:: * The Gamma Distribution:: * The Flat (Uniform) Distribution:: * The Lognormal Distribution:: * The Chi-squared Distribution:: * The F-distribution:: * The t-distribution:: * The Beta Distribution:: * The Logistic Distribution:: * The Pareto Distribution:: * Spherical Vector Distributions:: * The Weibull Distribution:: * The Type-1 Gumbel Distribution:: * The Type-2 Gumbel Distribution:: * The Dirichlet Distribution:: * General Discrete Distributions:: * The Poisson Distribution:: * The Bernoulli Distribution:: * The Binomial Distribution:: * The Multinomial Distribution:: * The Negative Binomial Distribution:: * The Pascal Distribution:: * The Geometric Distribution:: * The Hypergeometric Distribution:: * The Logarithmic Distribution:: * Shuffling and Sampling:: * Random Number Distribution Examples:: * Random Number Distribution References and Further Reading::  File: gsl-ref.info, Node: Random Number Distribution Introduction, Next: The Gaussian Distribution, Up: Random Number Distributions Introduction ============ Continuous random number distributions are defined by a probability density function, p(x), such that the probability of x occurring in the infinitesimal range x to x+dx is p dx. The cumulative distribution function for the lower tail P(x) is defined by the integral, P(x) = \int_{-\infty}^{x} dx' p(x') and gives the probability of a variate taking a value less than x. The cumulative distribution function for the upper tail Q(x) is defined by the integral, Q(x) = \int_{x}^{+\infty} dx' p(x') and gives the probability of a variate taking a value greater than x. The upper and lower cumulative distribution functions are related by P(x) + Q(x) = 1 and satisfy 0 <= P(x) <= 1, 0 <= Q(x) <= 1. The inverse cumulative distributions, x=P^{-1}(P) and x=Q^{-1}(Q) give the values of x which correspond to a specific value of P or Q. They can be used to find confidence limits from probability values. For discrete distributions the probability of sampling the integer value k is given by p(k), where \sum_k p(k) = 1. The cumulative distribution for the lower tail P(k) of a discrete distribution is defined as, P(k) = \sum_{i <= k} p(i) where the sum is over the allowed range of the distribution less than or equal to k. The cumulative distribution for the upper tail of a discrete distribution Q(k) is defined as Q(k) = \sum_{i > k} p(i) giving the sum of probabilities for all values greater than k. These two definitions satisfy the identity P(k)+Q(k)=1. If the range of the distribution is 1 to n inclusive then P(n)=1, Q(n)=0 while P(1) = p(1), Q(1)=1-p(1).  File: gsl-ref.info, Node: The Gaussian Distribution, Next: The Gaussian Tail Distribution, Prev: Random Number Distribution Introduction, Up: Random Number Distributions The Gaussian Distribution ========================= - Function: double gsl_ran_gaussian (const gsl_rng * R, double SIGMA) This function returns a Gaussian random variate, with mean zero and standard deviation SIGMA. 
The probability distribution for Gaussian random variates is,

     p(x) dx = {1 \over \sqrt{2 \pi \sigma^2}} \exp (-x^2 / 2\sigma^2) dx

for x in the range -\infty to +\infty.  Use the transformation z = \mu
+ x on the numbers returned by `gsl_ran_gaussian' to obtain a Gaussian
distribution with mean \mu.  This function uses the Box-Muller
algorithm which requires two calls to the random number generator R.

 - Function: double gsl_ran_gaussian_pdf (double X, double SIGMA)
     This function computes the probability density p(x) at X for a
     Gaussian distribution with standard deviation SIGMA, using the
     formula given above.

 - Function: double gsl_ran_gaussian_ziggurat (const gsl_rng * R, double SIGMA)
 - Function: double gsl_ran_gaussian_ratio_method (const gsl_rng * R, double SIGMA)
     These functions compute a Gaussian random variate using the
     alternative Marsaglia-Tsang ziggurat and Kinderman-Monahan-Leva
     ratio methods, respectively.  The ziggurat algorithm is the
     fastest available algorithm in most cases.

 - Function: double gsl_ran_ugaussian (const gsl_rng * R)
 - Function: double gsl_ran_ugaussian_pdf (double X)
 - Function: double gsl_ran_ugaussian_ratio_method (const gsl_rng * R)
     These functions compute results for the unit Gaussian
     distribution.  They are equivalent to the functions above with a
     standard deviation of one, SIGMA = 1.

 - Function: double gsl_cdf_gaussian_P (double X, double SIGMA)
 - Function: double gsl_cdf_gaussian_Q (double X, double SIGMA)
 - Function: double gsl_cdf_gaussian_Pinv (double P, double SIGMA)
 - Function: double gsl_cdf_gaussian_Qinv (double Q, double SIGMA)
     These functions compute the cumulative distribution functions
     P(x), Q(x) and their inverses for the Gaussian distribution with
     standard deviation SIGMA.

 - Function: double gsl_cdf_ugaussian_P (double X)
 - Function: double gsl_cdf_ugaussian_Q (double X)
 - Function: double gsl_cdf_ugaussian_Pinv (double P)
 - Function: double gsl_cdf_ugaussian_Qinv (double Q)
     These functions compute the cumulative distribution functions
     P(x), Q(x) and their inverses for the unit Gaussian distribution.

 File: gsl-ref.info, Node: The Gaussian Tail Distribution, Next: The Bivariate Gaussian Distribution, Prev: The Gaussian Distribution, Up: Random Number Distributions

The Gaussian Tail Distribution
==============================

 - Function: double gsl_ran_gaussian_tail (const gsl_rng * R, double A, double SIGMA)
     This function provides random variates from the upper tail of a
     Gaussian distribution with standard deviation SIGMA.  The values
     returned are larger than the lower limit A, which must be
     positive.  The method is based on Marsaglia's famous
     rectangle-wedge-tail algorithm (Ann. Math. Stat. 32, 894-899
     (1961)), with this aspect explained in Knuth, v2, 3rd ed,
     p139,586 (exercise 11).

     The probability distribution for Gaussian tail random variates is,

          p(x) dx = {1 \over N(a;\sigma) \sqrt{2 \pi \sigma^2}}
                    \exp (- x^2/(2 \sigma^2)) dx

     for x > a where N(a;\sigma) is the normalization constant,

          N(a;\sigma) = (1/2) erfc(a / sqrt(2 sigma^2)).

 - Function: double gsl_ran_gaussian_tail_pdf (double X, double A, double SIGMA)
     This function computes the probability density p(x) at X for a
     Gaussian tail distribution with standard deviation SIGMA and
     lower limit A, using the formula given above.

 - Function: double gsl_ran_ugaussian_tail (const gsl_rng * R, double A)
 - Function: double gsl_ran_ugaussian_tail_pdf (double X, double A)
     These functions compute results for the tail of a unit Gaussian
     distribution.
They are equivalent to the functions above with a standard deviation of one, SIGMA = 1.  File: gsl-ref.info, Node: The Bivariate Gaussian Distribution, Next: The Exponential Distribution, Prev: The Gaussian Tail Distribution, Up: Random Number Distributions The Bivariate Gaussian Distribution =================================== - Function: void gsl_ran_bivariate_gaussian (const gsl_rng * R, double SIGMA_X, double SIGMA_Y, double RHO, double * X, double * Y) This function generates a pair of correlated Gaussian variates, with mean zero, correlation coefficient RHO and standard deviations SIGMA_X and SIGMA_Y in the x and y directions. The probability distribution for bivariate Gaussian random variates is, p(x,y) dx dy = {1 \over 2 \pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp (-(x^2/\sigma_x^2 + y^2/\sigma_y^2 - 2 \rho x y/(\sigma_x\sigma_y))/2(1-\rho^2)) dx dy for x,y in the range -\infty to +\infty. The correlation coefficient RHO should lie between 1 and -1. - Function: double gsl_ran_bivariate_gaussian_pdf (double X, double Y, double SIGMA_X, double SIGMA_Y, double RHO) This function computes the probability density p(x,y) at (X,Y) for a bivariate Gaussian distribution with standard deviations SIGMA_X, SIGMA_Y and correlation coefficient RHO, using the formula given above.  File: gsl-ref.info, Node: The Exponential Distribution, Next: The Laplace Distribution, Prev: The Bivariate Gaussian Distribution, Up: Random Number Distributions The Exponential Distribution ============================ - Function: double gsl_ran_exponential (const gsl_rng * R, double MU) This function returns a random variate from the exponential distribution with mean MU. The distribution is, p(x) dx = {1 \over \mu} \exp(-x/\mu) dx for x >= 0. - Function: double gsl_ran_exponential_pdf (double X, double MU) This function computes the probability density p(x) at X for an exponential distribution with mean MU, using the formula given above. - Function: double gsl_cdf_exponential_P (double X, double MU) - Function: double gsl_cdf_exponential_Q (double X, double MU) - Function: double gsl_cdf_exponential_Pinv (double P, double MU) - Function: double gsl_cdf_exponential_Qinv (double Q, double MU) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the exponential distribution with mean MU.  File: gsl-ref.info, Node: The Laplace Distribution, Next: The Exponential Power Distribution, Prev: The Exponential Distribution, Up: Random Number Distributions The Laplace Distribution ======================== - Function: double gsl_ran_laplace (const gsl_rng * R, double A) This function returns a random variate from the Laplace distribution with width A. The distribution is, p(x) dx = {1 \over 2 a} \exp(-|x/a|) dx for -\infty < x < \infty. - Function: double gsl_ran_laplace_pdf (double X, double A) This function computes the probability density p(x) at X for a Laplace distribution with width A, using the formula given above. - Function: double gsl_cdf_laplace_P (double X, double A) - Function: double gsl_cdf_laplace_Q (double X, double A) - Function: double gsl_cdf_laplace_Pinv (double P, double A) - Function: double gsl_cdf_laplace_Qinv (double Q, double A) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Laplace distribution with width A.  
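As a brief illustration (not one of the manual's examples), the
inverse cumulative distribution functions just described can be used
to obtain quantiles.  The following sketch prints the central 95%
interval of a Laplace distribution with width a = 1; the width and the
probability level are arbitrary choices.

     #include <stdio.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       double a = 1.0;

       /* lower and upper 2.5% quantiles of the Laplace distribution */
       double lower = gsl_cdf_laplace_Pinv (0.025, a);
       double upper = gsl_cdf_laplace_Pinv (0.975, a);

       printf ("central 95%% interval: [%g, %g]\n", lower, upper);
       return 0;
     }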
 File: gsl-ref.info, Node: The Exponential Power Distribution, Next: The Cauchy Distribution, Prev: The Laplace Distribution, Up: Random Number Distributions

The Exponential Power Distribution
==================================

 - Function: double gsl_ran_exppow (const gsl_rng * R, double A, double B)
     This function returns a random variate from the exponential power
     distribution with scale parameter A and exponent B.  The
     distribution is,

          p(x) dx = {1 \over 2 a \Gamma(1+1/b)} \exp(-|x/a|^b) dx

     for -\infty < x < \infty.  For b = 1 this reduces to the Laplace
     distribution.  For b = 2 it has the same form as a gaussian
     distribution, but with a = \sqrt{2} \sigma.

 - Function: double gsl_ran_exppow_pdf (double X, double A, double B)
     This function computes the probability density p(x) at X for an
     exponential power distribution with scale parameter A and
     exponent B, using the formula given above.

 - Function: double gsl_cdf_exppow_P (double X, double A, double B)
 - Function: double gsl_cdf_exppow_Q (double X, double A, double B)
     These functions compute the cumulative distribution functions
     P(x), Q(x) for the exponential power distribution with parameters
     A and B.

 File: gsl-ref.info, Node: The Cauchy Distribution, Next: The Rayleigh Distribution, Prev: The Exponential Power Distribution, Up: Random Number Distributions

The Cauchy Distribution
=======================

 - Function: double gsl_ran_cauchy (const gsl_rng * R, double A)
     This function returns a random variate from the Cauchy
     distribution with scale parameter A.  The probability
     distribution for Cauchy random variates is,

          p(x) dx = {1 \over a\pi (1 + (x/a)^2) } dx

     for x in the range -\infty to +\infty.  The Cauchy distribution
     is also known as the Lorentz distribution.

 - Function: double gsl_ran_cauchy_pdf (double X, double A)
     This function computes the probability density p(x) at X for a
     Cauchy distribution with scale parameter A, using the formula
     given above.

 - Function: double gsl_cdf_cauchy_P (double X, double A)
 - Function: double gsl_cdf_cauchy_Q (double X, double A)
 - Function: double gsl_cdf_cauchy_Pinv (double P, double A)
 - Function: double gsl_cdf_cauchy_Qinv (double Q, double A)
     These functions compute the cumulative distribution functions
     P(x), Q(x) and their inverses for the Cauchy distribution with
     scale parameter A.

 File: gsl-ref.info, Node: The Rayleigh Distribution, Next: The Rayleigh Tail Distribution, Prev: The Cauchy Distribution, Up: Random Number Distributions

The Rayleigh Distribution
=========================

 - Function: double gsl_ran_rayleigh (const gsl_rng * R, double SIGMA)
     This function returns a random variate from the Rayleigh
     distribution with scale parameter SIGMA.  The distribution is,

          p(x) dx = {x \over \sigma^2} \exp(- x^2/(2 \sigma^2)) dx

     for x > 0.

 - Function: double gsl_ran_rayleigh_pdf (double X, double SIGMA)
     This function computes the probability density p(x) at X for a
     Rayleigh distribution with scale parameter SIGMA, using the
     formula given above.

 - Function: double gsl_cdf_rayleigh_P (double X, double SIGMA)
 - Function: double gsl_cdf_rayleigh_Q (double X, double SIGMA)
 - Function: double gsl_cdf_rayleigh_Pinv (double P, double SIGMA)
 - Function: double gsl_cdf_rayleigh_Qinv (double Q, double SIGMA)
     These functions compute the cumulative distribution functions
     P(x), Q(x) and their inverses for the Rayleigh distribution with
     scale parameter SIGMA.
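The sampling and cumulative distribution functions can be checked
against each other.  The following sketch (an illustration, not one of
the manual's examples) draws Rayleigh variates and compares the
fraction falling below a point x with the exact value of P(x); the
parameter values are arbitrary.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       int i, n = 100000, count = 0;
       double sigma = 2.0, x = 3.0;
       gsl_rng * r;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       for (i = 0; i < n; i++)
         {
           if (gsl_ran_rayleigh (r, sigma) < x)
             count++;
         }

       printf ("empirical P(%g) = %g\n", x, (double) count / n);
       printf ("exact     P(%g) = %g\n", x, gsl_cdf_rayleigh_P (x, sigma));

       gsl_rng_free (r);
       return 0;
     }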
File: gsl-ref.info, Node: The Rayleigh Tail Distribution, Next: The Landau Distribution, Prev: The Rayleigh Distribution, Up: Random Number Distributions The Rayleigh Tail Distribution ============================== - Function: double gsl_ran_rayleigh_tail (const gsl_rng * R, double A, double SIGMA) This function returns a random variate from the tail of the Rayleigh distribution with scale parameter SIGMA and a lower limit of A. The distribution is, p(x) dx = {x \over \sigma^2} \exp ((a^2 - x^2) /(2 \sigma^2)) dx for x > a. - Function: double gsl_ran_rayleigh_tail_pdf (double X, double A, double SIGMA) This function computes the probability density p(x) at X for a Rayleigh tail distribution with scale parameter SIGMA and lower limit A, using the formula given above.  File: gsl-ref.info, Node: The Landau Distribution, Next: The Levy alpha-Stable Distributions, Prev: The Rayleigh Tail Distribution, Up: Random Number Distributions The Landau Distribution ======================= - Function: double gsl_ran_landau (const gsl_rng * R) This function returns a random variate from the Landau distribution. The probability distribution for Landau random variates is defined analytically by the complex integral, p(x) = (1/(2 \pi i)) \int_{c-i\infty}^{c+i\infty} ds exp(s log(s) + x s) For numerical purposes it is more convenient to use the following equivalent form of the integral, p(x) = (1/\pi) \int_0^\infty dt \exp(-t \log(t) - x t) \sin(\pi t). - Function: double gsl_ran_landau_pdf (double X) This function computes the probability density p(x) at X for the Landau distribution using an approximation to the formula given above.  File: gsl-ref.info, Node: The Levy alpha-Stable Distributions, Next: The Levy skew alpha-Stable Distribution, Prev: The Landau Distribution, Up: Random Number Distributions The Levy alpha-Stable Distributions =================================== - Function: double gsl_ran_levy (const gsl_rng * R, double C, double ALPHA) This function returns a random variate from the Levy symmetric stable distribution with scale C and exponent ALPHA. The symmetric stable probability distribution is defined by a fourier transform, p(x) = {1 \over 2 \pi} \int_{-\infty}^{+\infty} dt \exp(-it x - |c t|^alpha) There is no explicit solution for the form of p(x) and the library does not define a corresponding `pdf' function. For \alpha = 1 the distribution reduces to the Cauchy distribution. For \alpha = 2 it is a Gaussian distribution with \sigma = \sqrt{2} c. For \alpha < 1 the tails of the distribution become extremely wide. The algorithm only works for 0 < alpha <= 2.  File: gsl-ref.info, Node: The Levy skew alpha-Stable Distribution, Next: The Gamma Distribution, Prev: The Levy alpha-Stable Distributions, Up: Random Number Distributions The Levy skew alpha-Stable Distribution ======================================= - Function: double gsl_ran_levy_skew (const gsl_rng * R, double C, double ALPHA, double BETA) This function returns a random variate from the Levy skew stable distribution with scale C, exponent ALPHA and skewness parameter BETA. The skewness parameter must lie in the range [-1,1]. The Levy skew stable probability distribution is defined by a fourier transform, p(x) = {1 \over 2 \pi} \int_{-\infty}^{+\infty} dt \exp(-it x - |c t|^alpha (1-i beta sign(t) tan(pi alpha/2))) When \alpha = 1 the term \tan(\pi \alpha/2) is replaced by -(2/\pi)\log|t|. There is no explicit solution for the form of p(x) and the library does not define a corresponding `pdf' function. 
For \alpha = 2 the distribution reduces to a Gaussian distribution with \sigma = \sqrt{2} c and the skewness parameter has no effect. For \alpha < 1 the tails of the distribution become extremely wide. The symmetric distribution corresponds to \beta = 0. The algorithm only works for 0 < alpha <= 2. The Levy alpha-stable distributions have the property that if N alpha-stable variates are drawn from the distribution p(c, \alpha, \beta) then the sum Y = X_1 + X_2 + \dots + X_N will also be distributed as an alpha-stable variate, p(N^(1/\alpha) c, \alpha, \beta).  File: gsl-ref.info, Node: The Gamma Distribution, Next: The Flat (Uniform) Distribution, Prev: The Levy skew alpha-Stable Distribution, Up: Random Number Distributions The Gamma Distribution ====================== - Function: double gsl_ran_gamma (const gsl_rng * R, double A, double B) This function returns a random variate from the gamma distribution. The distribution function is, p(x) dx = {1 \over \Gamma(a) b^a} x^{a-1} e^{-x/b} dx for x > 0. The gamma distribution with an integer parameter A is known as the Erlang distribution. The variates are computed using the algorithms from Knuth (vol 2). - Function: double gsl_ran_gamma_mt (const gsl_rng * R, double A, double B) This function returns a gamma variate using the Marsaglia-Tsang fast gamma method. - Function: double gsl_ran_gamma_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a gamma distribution with parameters A and B, using the formula given above. - Function: double gsl_cdf_gamma_P (double X, double A, double B) - Function: double gsl_cdf_gamma_Q (double X, double A, double B) - Function: double gsl_cdf_gamma_Pinv (double P, double A, double B) - Function: double gsl_cdf_gamma_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the gamma distribution with parameters A and B.  File: gsl-ref.info, Node: The Flat (Uniform) Distribution, Next: The Lognormal Distribution, Prev: The Gamma Distribution, Up: Random Number Distributions The Flat (Uniform) Distribution =============================== - Function: double gsl_ran_flat (const gsl_rng * R, double A, double B) This function returns a random variate from the flat (uniform) distribution from A to B. The distribution is, p(x) dx = {1 \over (b-a)} dx if a <= x < b and 0 otherwise. - Function: double gsl_ran_flat_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a uniform distribution from A to B, using the formula given above. - Function: double gsl_cdf_flat_P (double X, double A, double B) - Function: double gsl_cdf_flat_Q (double X, double A, double B) - Function: double gsl_cdf_flat_Pinv (double P, double A, double B) - Function: double gsl_cdf_flat_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for a uniform distribution from A to B.  File: gsl-ref.info, Node: The Lognormal Distribution, Next: The Chi-squared Distribution, Prev: The Flat (Uniform) Distribution, Up: Random Number Distributions The Lognormal Distribution ========================== - Function: double gsl_ran_lognormal (const gsl_rng * R, double ZETA, double SIGMA) This function returns a random variate from the lognormal distribution. The distribution function is, p(x) dx = {1 \over x \sqrt{2 \pi \sigma^2} } \exp(-(\ln(x) - \zeta)^2/2 \sigma^2) dx for x > 0. 
- Function: double gsl_ran_lognormal_pdf (double X, double ZETA, double SIGMA) This function computes the probability density p(x) at X for a lognormal distribution with parameters ZETA and SIGMA, using the formula given above. - Function: double gsl_cdf_lognormal_P (double X, double ZETA, double SIGMA) - Function: double gsl_cdf_lognormal_Q (double X, double ZETA, double SIGMA) - Function: double gsl_cdf_lognormal_Pinv (double P, double ZETA, double SIGMA) - Function: double gsl_cdf_lognormal_Qinv (double Q, double ZETA, double SIGMA) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the lognormal distribution with parameters ZETA and SIGMA.  File: gsl-ref.info, Node: The Chi-squared Distribution, Next: The F-distribution, Prev: The Lognormal Distribution, Up: Random Number Distributions The Chi-squared Distribution ============================ The chi-squared distribution arises in statistics. If Y_i are n independent gaussian random variates with unit variance then the sum-of-squares, X_i = \sum_i Y_i^2 has a chi-squared distribution with n degrees of freedom. - Function: double gsl_ran_chisq (const gsl_rng * R, double NU) This function returns a random variate from the chi-squared distribution with NU degrees of freedom. The distribution function is, p(x) dx = {1 \over 2 \Gamma(\nu/2) } (x/2)^{\nu/2 - 1} \exp(-x/2) dx for x >= 0. - Function: double gsl_ran_chisq_pdf (double X, double NU) This function computes the probability density p(x) at X for a chi-squared distribution with NU degrees of freedom, using the formula given above. - Function: double gsl_cdf_chisq_P (double X, double NU) - Function: double gsl_cdf_chisq_Q (double X, double NU) - Function: double gsl_cdf_chisq_Pinv (double P, double NU) - Function: double gsl_cdf_chisq_Qinv (double Q, double NU) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the chi-squared distribution with NU degrees of freedom.  File: gsl-ref.info, Node: The F-distribution, Next: The t-distribution, Prev: The Chi-squared Distribution, Up: Random Number Distributions The F-distribution ================== The F-distribution arises in statistics. If Y_1 and Y_2 are chi-squared deviates with \nu_1 and \nu_2 degrees of freedom then the ratio, X = { (Y_1 / \nu_1) \over (Y_2 / \nu_2) } has an F-distribution F(x;\nu_1,\nu_2). - Function: double gsl_ran_fdist (const gsl_rng * R, double NU1, double NU2) This function returns a random variate from the F-distribution with degrees of freedom NU1 and NU2. The distribution function is, p(x) dx = { \Gamma((\nu_1 + \nu_2)/2) \over \Gamma(\nu_1/2) \Gamma(\nu_2/2) } \nu_1^{\nu_1/2} \nu_2^{\nu_2/2} x^{\nu_1/2 - 1} (\nu_2 + \nu_1 x)^{-\nu_1/2 -\nu_2/2} for x >= 0. - Function: double gsl_ran_fdist_pdf (double X, double NU1, double NU2) This function computes the probability density p(x) at X for an F-distribution with NU1 and NU2 degrees of freedom, using the formula given above. - Function: double gsl_cdf_fdist_P (double X, double NU1, double NU2) - Function: double gsl_cdf_fdist_Q (double X, double NU1, double NU2) - Function: double gsl_cdf_fdist_Pinv (double P, double NU1, double NU2) - Function: double gsl_cdf_fdist_Qinv (double Q, double NU1, double NU2) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the F-distribution with NU1 and NU2 degrees of freedom.  
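The relation between the Gaussian and chi-squared distributions stated
above can be checked numerically.  The following sketch (illustrative
only, not one of the manual's examples) sums the squares of \nu unit
Gaussian variates and compares the sample mean of the resulting values
with \nu, the mean of the chi-squared distribution; the values of nu
and n are arbitrary.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       int i, j, nu = 3, n = 100000;
       double sum = 0.0;
       gsl_rng * r;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       for (i = 0; i < n; i++)
         {
           /* the sum of nu squared unit Gaussians is chi-squared(nu) */
           double chisq = 0.0;
           for (j = 0; j < nu; j++)
             {
               double y = gsl_ran_ugaussian (r);
               chisq += y * y;
             }
           sum += chisq;
         }

       printf ("sample mean = %g (expected nu = %d)\n", sum / n, nu);

       gsl_rng_free (r);
       return 0;
     }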
File: gsl-ref.info, Node: The t-distribution, Next: The Beta Distribution, Prev: The F-distribution, Up: Random Number Distributions The t-distribution ================== The t-distribution arises in statistics. If Y_1 has a normal distribution and Y_2 has a chi-squared distribution with \nu degrees of freedom then the ratio, X = { Y_1 \over \sqrt{Y_2 / \nu} } has a t-distribution t(x;\nu) with \nu degrees of freedom. - Function: double gsl_ran_tdist (const gsl_rng * R, double NU) This function returns a random variate from the t-distribution. The distribution function is, p(x) dx = {\Gamma((\nu + 1)/2) \over \sqrt{\pi \nu} \Gamma(\nu/2)} (1 + x^2/\nu)^{-(\nu + 1)/2} dx for -\infty < x < +\infty. - Function: double gsl_ran_tdist_pdf (double X, double NU) This function computes the probability density p(x) at X for a t-distribution with NU degrees of freedom, using the formula given above. - Function: double gsl_cdf_tdist_P (double X, double NU) - Function: double gsl_cdf_tdist_Q (double X, double NU) - Function: double gsl_cdf_tdist_Pinv (double P, double NU) - Function: double gsl_cdf_tdist_Qinv (double Q, double NU) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the t-distribution with NU degrees of freedom.  File: gsl-ref.info, Node: The Beta Distribution, Next: The Logistic Distribution, Prev: The t-distribution, Up: Random Number Distributions The Beta Distribution ===================== - Function: double gsl_ran_beta (const gsl_rng * R, double A, double B) This function returns a random variate from the beta distribution. The distribution function is, p(x) dx = {\Gamma(a+b) \over \Gamma(a) \Gamma(b)} x^{a-1} (1-x)^{b-1} dx for 0 <= x <= 1. - Function: double gsl_ran_beta_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a beta distribution with parameters A and B, using the formula given above. - Function: double gsl_cdf_beta_P (double X, double A, double B) - Function: double gsl_cdf_beta_Q (double X, double A, double B) - Function: double gsl_cdf_beta_Pinv (double P, double A, double B) - Function: double gsl_cdf_beta_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the beta distribution with parameters A and B.  File: gsl-ref.info, Node: The Logistic Distribution, Next: The Pareto Distribution, Prev: The Beta Distribution, Up: Random Number Distributions The Logistic Distribution ========================= - Function: double gsl_ran_logistic (const gsl_rng * R, double A) This function returns a random variate from the logistic distribution. The distribution function is, p(x) dx = { \exp(-x/a) \over a (1 + \exp(-x/a))^2 } dx for -\infty < x < +\infty. - Function: double gsl_ran_logistic_pdf (double X, double A) This function computes the probability density p(x) at X for a logistic distribution with scale parameter A, using the formula given above. - Function: double gsl_cdf_logistic_P (double X, double A) - Function: double gsl_cdf_logistic_Q (double X, double A) - Function: double gsl_cdf_logistic_Pinv (double P, double A) - Function: double gsl_cdf_logistic_Qinv (double Q, double A) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the logistic distribution with scale parameter A.  
File: gsl-ref.info, Node: The Pareto Distribution, Next: Spherical Vector Distributions, Prev: The Logistic Distribution, Up: Random Number Distributions The Pareto Distribution ======================= - Function: double gsl_ran_pareto (const gsl_rng * R, double A, double B) This function returns a random variate from the Pareto distribution of order A. The distribution function is, p(x) dx = (a/b) / (x/b)^{a+1} dx for x >= b. - Function: double gsl_ran_pareto_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a Pareto distribution with exponent A and scale B, using the formula given above. - Function: double gsl_cdf_pareto_P (double X, double A, double B) - Function: double gsl_cdf_pareto_Q (double X, double A, double B) - Function: double gsl_cdf_pareto_Pinv (double P, double A, double B) - Function: double gsl_cdf_pareto_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Pareto distribution with exponent A and scale B.  File: gsl-ref.info, Node: Spherical Vector Distributions, Next: The Weibull Distribution, Prev: The Pareto Distribution, Up: Random Number Distributions Spherical Vector Distributions ============================== The spherical distributions generate random vectors, located on a spherical surface. They can be used as random directions, for example in the steps of a random walk. - Function: void gsl_ran_dir_2d (const gsl_rng * R, double * X, double * Y) - Function: void gsl_ran_dir_2d_trig_method (const gsl_rng * R, double * X, double * Y) This function returns a random direction vector v = (X,Y) in two dimensions. The vector is normalized such that |v|^2 = x^2 + y^2 = 1. The obvious way to do this is to take a uniform random number between 0 and 2\pi and let X and Y be the sine and cosine respectively. Two trig functions would have been expensive in the old days, but with modern hardware implementations, this is sometimes the fastest way to go. This is the case for the Pentium (but not the case for the Sun Sparcstation). One can avoid the trig evaluations by choosing X and Y in the interior of a unit circle (choose them at random from the interior of the enclosing square, and then reject those that are outside the unit circle), and then dividing by \sqrt{x^2 + y^2}. A much cleverer approach, attributed to von Neumann (See Knuth, v2, 3rd ed, p140, exercise 23), requires neither trig nor a square root. In this approach, U and V are chosen at random from the interior of a unit circle, and then x=(u^2-v^2)/(u^2+v^2) and y=2uv/(u^2+v^2). - Function: void gsl_ran_dir_3d (const gsl_rng * R, double * X, double * Y, double * Z) This function returns a random direction vector v = (X,Y,Z) in three dimensions. The vector is normalized such that |v|^2 = x^2 + y^2 + z^2 = 1. The method employed is due to Robert E. Knop (CACM 13, 326 (1970)), and explained in Knuth, v2, 3rd ed, p136. It uses the surprising fact that the distribution projected along any axis is actually uniform (this is only true for 3 dimensions). - Function: void gsl_ran_dir_nd (const gsl_rng * R, size_t N, double * X) This function returns a random direction vector v = (x_1,x_2,...,x_n) in N dimensions. The vector is normalized such that |v|^2 = x_1^2 + x_2^2 + ... + x_n^2 = 1. The method uses the fact that a multivariate gaussian distribution is spherically symmetric. Each component is generated to have a gaussian distribution, and then the components are normalized. 
The method is described by Knuth, v2, 3rd ed, p135-136, and attributed to G. W. Brown, Modern Mathematics for the Engineer (1956).  File: gsl-ref.info, Node: The Weibull Distribution, Next: The Type-1 Gumbel Distribution, Prev: Spherical Vector Distributions, Up: Random Number Distributions The Weibull Distribution ======================== - Function: double gsl_ran_weibull (const gsl_rng * R, double A, double B) This function returns a random variate from the Weibull distribution. The distribution function is, p(x) dx = {b \over a^b} x^{b-1} \exp(-(x/a)^b) dx for x >= 0. - Function: double gsl_ran_weibull_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a Weibull distribution with scale A and exponent B, using the formula given above. - Function: double gsl_cdf_weibull_P (double X, double A, double B) - Function: double gsl_cdf_weibull_Q (double X, double A, double B) - Function: double gsl_cdf_weibull_Pinv (double P, double A, double B) - Function: double gsl_cdf_weibull_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Weibull distribution with scale A and exponent B.  File: gsl-ref.info, Node: The Type-1 Gumbel Distribution, Next: The Type-2 Gumbel Distribution, Prev: The Weibull Distribution, Up: Random Number Distributions The Type-1 Gumbel Distribution ============================== - Function: double gsl_ran_gumbel1 (const gsl_rng * R, double A, double B) This function returns a random variate from the Type-1 Gumbel distribution. The Type-1 Gumbel distribution function is, p(x) dx = a b \exp(-(b \exp(-ax) + ax)) dx for -\infty < x < \infty. - Function: double gsl_ran_gumbel1_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a Type-1 Gumbel distribution with parameters A and B, using the formula given above. - Function: double gsl_cdf_gumbel1_P (double X, double A, double B) - Function: double gsl_cdf_gumbel1_Q (double X, double A, double B) - Function: double gsl_cdf_gumbel1_Pinv (double P, double A, double B) - Function: double gsl_cdf_gumbel1_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Type-1 Gumbel distribution with parameters A and B.  File: gsl-ref.info, Node: The Type-2 Gumbel Distribution, Next: The Dirichlet Distribution, Prev: The Type-1 Gumbel Distribution, Up: Random Number Distributions The Type-2 Gumbel Distribution ============================== - Function: double gsl_ran_gumbel2 (const gsl_rng * R, double A, double B) This function returns a random variate from the Type-2 Gumbel distribution. The Type-2 Gumbel distribution function is, p(x) dx = a b x^{-a-1} \exp(-b x^{-a}) dx for 0 < x < \infty. - Function: double gsl_ran_gumbel2_pdf (double X, double A, double B) This function computes the probability density p(x) at X for a Type-2 Gumbel distribution with parameters A and B, using the formula given above. - Function: double gsl_cdf_gumbel2_P (double X, double A, double B) - Function: double gsl_cdf_gumbel2_Q (double X, double A, double B) - Function: double gsl_cdf_gumbel2_Pinv (double P, double A, double B) - Function: double gsl_cdf_gumbel2_Qinv (double Q, double A, double B) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Type-2 Gumbel distribution with parameters A and B.  
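Returning to the spherical vector functions described earlier in this
chapter, the following sketch (illustrative only, not one of the
manual's examples) draws a few random directions in three dimensions
and verifies that each has unit length.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       int i;
       gsl_rng * r;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       for (i = 0; i < 5; i++)
         {
           double x, y, z;
           gsl_ran_dir_3d (r, &x, &y, &z);
           /* each vector lies on the unit sphere, so |v|^2 = 1 */
           printf ("%g %g %g   |v|^2 = %g\n", x, y, z, x*x + y*y + z*z);
         }

       gsl_rng_free (r);
       return 0;
     }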
File: gsl-ref.info, Node: The Dirichlet Distribution, Next: General Discrete Distributions, Prev: The Type-2 Gumbel Distribution, Up: Random Number Distributions The Dirichlet Distribution ========================== - Function: void gsl_ran_dirichlet (const gsl_rng * R, size_t K, const double ALPHA[], double THETA[]) This function returns an array of K random variates from a Dirichlet distribution of order K-1. The distribution function is p(\theta_1, ..., \theta_K) d\theta_1 ... d\theta_K = (1/Z) \prod_{i=1}^K \theta_i^{\alpha_i - 1} \delta(1 -\sum_{i=1}^K \theta_i) d\theta_1 ... d\theta_K for theta_i >= 0 and alpha_i >= 0. The delta function ensures that \sum \theta_i = 1. The normalization factor Z is Z = {\prod_{i=1}^K \Gamma(\alpha_i)} / {\Gamma( \sum_{i=1}^K \alpha_i)} The random variates are generated by sampling K values from gamma distributions with parameters a=alpha_i, b=1, and renormalizing. See A.M. Law, W.D. Kelton, `Simulation Modeling and Analysis' (1991). - Function: double gsl_ran_dirichlet_pdf (size_t K, const double ALPHA[], const double THETA[]) This function computes the probability density p(\theta_1, ... , \theta_K) at THETA[K] for a Dirichlet distribution with parameters ALPHA[K], using the formula given above. - Function: double gsl_ran_dirichlet_lnpdf (size_t K, const double ALPHA[], const double THETA[]) This function computes the logarithm of the probability density p(\theta_1, ... , \theta_K) for a Dirichlet distribution with parameters ALPHA[K].  File: gsl-ref.info, Node: General Discrete Distributions, Next: The Poisson Distribution, Prev: The Dirichlet Distribution, Up: Random Number Distributions General Discrete Distributions ============================== Given K discrete events with different probabilities P[k], produce a random value k consistent with its probability. The obvious way to do this is to preprocess the probability list by generating a cumulative probability array with K+1 elements: C[0] = 0 C[k+1] = C[k]+P[k]. Note that this construction produces C[K]=1. Now choose a uniform deviate u between 0 and 1, and find the value of k such that C[k] <= u < C[k+1]. Although this in principle requires of order \log K steps per random number generation, they are fast steps, and if you use something like \lfloor uK \rfloor as a starting point, you can often do pretty well. But faster methods have been devised. Again, the idea is to preprocess the probability list, and save the result in some form of lookup table; then the individual calls for a random discrete event can go rapidly. An approach invented by G. Marsaglia (Generating discrete random numbers in a computer, Comm ACM 6, 37-38 (1963)) is very clever, and readers interested in examples of good algorithm design are directed to this short and well-written paper. Unfortunately, for large K, Marsaglia's lookup table can be quite large. A much better approach is due to Alastair J. Walker (An efficient method for generating discrete random variables with general distributions, ACM Trans on Mathematical Software 3, 253-256 (1977); see also Knuth, v2, 3rd ed, p120-121,139). This requires two lookup tables, one floating point and one integer, but both only of size K. After preprocessing, the random numbers are generated in O(1) time, even for large K. The preprocessing suggested by Walker requires O(K^2) effort, but that is not actually necessary, and the implementation provided here only takes O(K) effort. 
In general, more preprocessing leads to faster generation of the individual random numbers, but a diminishing return is reached pretty early. Knuth points out that the optimal preprocessing is combinatorially difficult for large K. This method can be used to speed up some of the discrete random number generators below, such as the binomial distribution. To use it for something like the Poisson Distribution, a modification would have to be made, since it only takes a finite set of K outcomes. - Function: gsl_ran_discrete_t * gsl_ran_discrete_preproc (size_t K, const double * P) This function returns a pointer to a structure that contains the lookup table for the discrete random number generator. The array P[] contains the probabilities of the discrete events; these array elements must all be positive, but they needn't add up to one (so you can think of them more generally as "weights")--the preprocessor will normalize appropriately. This return value is used as an argument for the `gsl_ran_discrete' function below. - Function: size_t gsl_ran_discrete (const gsl_rng * R, const gsl_ran_discrete_t * G) After the preprocessor, above, has been called, you use this function to get the discrete random numbers. - Function: double gsl_ran_discrete_pdf (size_t K, const gsl_ran_discrete_t * G) Returns the probability P[k] of observing the variable K. Since P[k] is not stored as part of the lookup table, it must be recomputed; this computation takes O(K), so if K is large and you care about the original array P[k] used to create the lookup table, then you should just keep this original array P[k] around. - Function: void gsl_ran_discrete_free (gsl_ran_discrete_t * G) De-allocates the lookup table pointed to by G.  File: gsl-ref.info, Node: The Poisson Distribution, Next: The Bernoulli Distribution, Prev: General Discrete Distributions, Up: Random Number Distributions The Poisson Distribution ======================== - Function: unsigned int gsl_ran_poisson (const gsl_rng * R, double MU) This function returns a random integer from the Poisson distribution with mean MU. The probability distribution for Poisson variates is, p(k) = {\mu^k \over k!} \exp(-\mu) for k >= 0. - Function: double gsl_ran_poisson_pdf (unsigned int K, double MU) This function computes the probability p(k) of obtaining K from a Poisson distribution with mean MU, using the formula given above. - Function: double gsl_cdf_poisson_P (unsigned int K, double MU) - Function: double gsl_cdf_poisson_Q (unsigned int K, double MU) These functions compute the cumulative distribution functions P(k), Q(k) for the Poisson distribution with parameter MU.  File: gsl-ref.info, Node: The Bernoulli Distribution, Next: The Binomial Distribution, Prev: The Poisson Distribution, Up: Random Number Distributions The Bernoulli Distribution ========================== - Function: unsigned int gsl_ran_bernoulli (const gsl_rng * R, double P) This function returns either 0 or 1, the result of a Bernoulli trial with probability P. The probability distribution for a Bernoulli trial is, p(0) = 1 - p p(1) = p - Function: double gsl_ran_bernoulli_pdf (unsigned int K, double P) This function computes the probability p(k) of obtaining K from a Bernoulli distribution with probability parameter P, using the formula given above.  
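The general discrete distribution functions described earlier in this
chapter are easiest to understand with a short example.  The following
sketch (not one of the manual's examples) builds a lookup table from a
set of unnormalized weights, draws samples from it, and frees the
table; the weights are arbitrary.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       size_t i, n = 100000;
       /* unnormalized weights for four events; the preprocessor
          normalizes them to probabilities 0.1, 0.2, 0.3, 0.4 */
       double P[4] = { 1.0, 2.0, 3.0, 4.0 };
       unsigned long count[4] = { 0, 0, 0, 0 };

       gsl_rng * r;
       gsl_ran_discrete_t * g;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       g = gsl_ran_discrete_preproc (4, P);

       for (i = 0; i < n; i++)
         {
           size_t k = gsl_ran_discrete (r, g);
           count[k]++;
         }

       for (i = 0; i < 4; i++)
         printf ("k = %d: observed %g, expected %g\n", (int) i,
                 (double) count[i] / n, gsl_ran_discrete_pdf (i, g));

       gsl_ran_discrete_free (g);
       gsl_rng_free (r);
       return 0;
     }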
File: gsl-ref.info, Node: The Binomial Distribution, Next: The Multinomial Distribution, Prev: The Bernoulli Distribution, Up: Random Number Distributions The Binomial Distribution ========================= - Function: unsigned int gsl_ran_binomial (const gsl_rng * R, double P, unsigned int N) This function returns a random integer from the binomial distribution, the number of successes in N independent trials with probability P. The probability distribution for binomial variates is, p(k) = {n! \over k! (n-k)! } p^k (1-p)^{n-k} for 0 <= k <= n. - Function: double gsl_ran_binomial_pdf (unsigned int K, double P, unsigned int N) This function computes the probability p(k) of obtaining K from a binomial distribution with parameters P and N, using the formula given above. - Function: double gsl_cdf_binomial_P (unsigned int K, double P, unsigned int N) - Function: double gsl_cdf_binomial_Q (unsigned int K, double P, unsigned int N) These functions compute the cumulative distribution functions P(k), Q(k) for the binomial distribution with parameters P and N.  File: gsl-ref.info, Node: The Multinomial Distribution, Next: The Negative Binomial Distribution, Prev: The Binomial Distribution, Up: Random Number Distributions The Multinomial Distribution ============================ - Function: void gsl_ran_multinomial (const gsl_rng * R, size_t K, unsigned int N, const double P[], unsigned int N[]) This function returns an array of K random variates from a multinomial distribution. The distribution function is, P(n_1, n_2, ..., n_K) = (N!/(n_1! n_2! ... n_K!)) p_1^n_1 p_2^n_2 ... p_K^n_K where (n_1, n_2, ..., n_K) are nonnegative integers with sum_{k=1}^K n_k = N, and (p_1, p_2, ..., p_K) is a probability distribution with \sum p_i = 1. If the array P[K] is not normalized then its entries will be treated as weights and normalized appropriately. Random variates are generated using the conditional binomial method (see C.S. David, `The computer generation of multinomial random variates', Comp. Stat. Data Anal. 16 (1993) 205-217 for details). - Function: double gsl_ran_multinomial_pdf (size_t K, const double P[], const unsigned int N[]) This function computes the probability P(n_1, n_2, ..., n_K) of sampling N[K] from a multinomial distribution with parameters P[K], using the formula given above. - Function: double gsl_ran_multinomial_lnpdf (size_t K, const double P[], const unsigned int N[]) This function returns the logarithm of the probability for the multinomial distribution P(n_1, n_2, ..., n_K) with parameters P[K].  File: gsl-ref.info, Node: The Negative Binomial Distribution, Next: The Pascal Distribution, Prev: The Multinomial Distribution, Up: Random Number Distributions The Negative Binomial Distribution ================================== - Function: unsigned int gsl_ran_negative_binomial (const gsl_rng * R, double P, double N) This function returns a random integer from the negative binomial distribution, the number of failures occurring before N successes in independent trials with probability P of success. The probability distribution for negative binomial variates is, p(k) = {\Gamma(n + k) \over \Gamma(k+1) \Gamma(n) } p^n (1-p)^k Note that n is not required to be an integer. - Function: double gsl_ran_negative_binomial_pdf (unsigned int K, double P, double N) This function computes the probability p(k) of obtaining K from a negative binomial distribution with parameters P and N, using the formula given above. 
- Function: double gsl_cdf_negative_binomial_P (unsigned int K, double P, double N) - Function: double gsl_cdf_negative_binomial_Q (unsigned int K, double P, double N) These functions compute the cumulative distribution functions P(k), Q(k) for the negative binomial distribution with parameters P and N.  File: gsl-ref.info, Node: The Pascal Distribution, Next: The Geometric Distribution, Prev: The Negative Binomial Distribution, Up: Random Number Distributions The Pascal Distribution ======================= - Function: unsigned int gsl_ran_pascal (const gsl_rng * R, double P, unsigned int N) This function returns a random integer from the Pascal distribution. The Pascal distribution is simply a negative binomial distribution with an integer value of n. p(k) = {(n + k - 1)! \over k! (n - 1)! } p^n (1-p)^k for k >= 0 - Function: double gsl_ran_pascal_pdf (unsigned int K, double P, unsigned int N) This function computes the probability p(k) of obtaining K from a Pascal distribution with parameters P and N, using the formula given above. - Function: double gsl_cdf_pascal_P (unsigned int K, double P, unsigned int N) - Function: double gsl_cdf_pascal_Q (unsigned int K, double P, unsigned int N) These functions compute the cumulative distribution functions P(k), Q(k) for the Pascal distribution with parameters P and N.  File: gsl-ref.info, Node: The Geometric Distribution, Next: The Hypergeometric Distribution, Prev: The Pascal Distribution, Up: Random Number Distributions The Geometric Distribution ========================== - Function: unsigned int gsl_ran_geometric (const gsl_rng * R, double P) This function returns a random integer from the geometric distribution, the number of independent trials with probability P until the first success. The probability distribution for geometric variates is, p(k) = p (1-p)^(k-1) for k >= 1. Note that the distribution begins with k=1 with this definition. There is another convention in which the exponent k-1 is replaced by k. - Function: double gsl_ran_geometric_pdf (unsigned int K, double P) This function computes the probability p(k) of obtaining K from a geometric distribution with probability parameter P, using the formula given above. - Function: double gsl_cdf_geometric_P (unsigned int K, double P) - Function: double gsl_cdf_geometric_Q (unsigned int K, double P) These functions compute the cumulative distribution functions P(k), Q(k) for the geometric distribution with parameter P.  File: gsl-ref.info, Node: The Hypergeometric Distribution, Next: The Logarithmic Distribution, Prev: The Geometric Distribution, Up: Random Number Distributions The Hypergeometric Distribution =============================== - Function: unsigned int gsl_ran_hypergeometric (const gsl_rng * R, unsigned int N1, unsigned int N2, unsigned int T) This function returns a random integer from the hypergeometric distribution. The probability distribution for hypergeometric random variates is, p(k) = C(n_1, k) C(n_2, t - k) / C(n_1 + n_2, t) where C(a,b) = a!/(b!(a-b)!) and t <= n_1 + n_2. The domain of k is max(0,t-n_2), ..., min(t,n_1). If a population contains n_1 elements of "type 1" and n_2 elements of "type 2" then the hypergeometric distribution gives the probability of obtaining k elements of "type 1" in t samples from the population without replacement. 
- Function: double gsl_ran_hypergeometric_pdf (unsigned int K, unsigned int N1, unsigned int N2, unsigned int T) This function computes the probability p(k) of obtaining K from a hypergeometric distribution with parameters N1, N2, T, using the formula given above. - Function: double gsl_cdf_hypergeometric_P (unsigned int K, unsigned int N1, unsigned int N2, unsigned int T) - Function: double gsl_cdf_hypergeometric_Q (unsigned int K, unsigned int N1, unsigned int N2, unsigned int T) These functions compute the cumulative distribution functions P(k), Q(k) for the hypergeometric distribution with parameters N1, N2 and T.  File: gsl-ref.info, Node: The Logarithmic Distribution, Next: Shuffling and Sampling, Prev: The Hypergeometric Distribution, Up: Random Number Distributions The Logarithmic Distribution ============================ - Function: unsigned int gsl_ran_logarithmic (const gsl_rng * R, double P) This function returns a random integer from the logarithmic distribution. The probability distribution for logarithmic random variates is, p(k) = {-1 \over \log(1-p)} {(p^k \over k)} for k >= 1. - Function: double gsl_ran_logarithmic_pdf (unsigned int K, double P) This function computes the probability p(k) of obtaining K from a logarithmic distribution with probability parameter P, using the formula given above.  File: gsl-ref.info, Node: Shuffling and Sampling, Next: Random Number Distribution Examples, Prev: The Logarithmic Distribution, Up: Random Number Distributions Shuffling and Sampling ====================== The following functions allow the shuffling and sampling of a set of objects. The algorithms rely on a random number generator as a source of randomness and a poor quality generator can lead to correlations in the output. In particular it is important to avoid generators with a short period. For more information see Knuth, v2, 3rd ed, Section 3.4.2, "Random Sampling and Shuffling". - Function: void gsl_ran_shuffle (const gsl_rng * R, void * BASE, size_t N, size_t SIZE) This function randomly shuffles the order of N objects, each of size SIZE, stored in the array BASE[0..N-1]. The output of the random number generator R is used to produce the permutation. The algorithm generates all possible n! permutations with equal probability, assuming a perfect source of random numbers. The following code shows how to shuffle the numbers from 0 to 51, int a[52]; for (i = 0; i < 52; i++) { a[i] = i; } gsl_ran_shuffle (r, a, 52, sizeof (int)); - Function: int gsl_ran_choose (const gsl_rng * R, void * DEST, size_t K, void * SRC, size_t N, size_t SIZE) This function fills the array DEST[k] with K objects taken randomly from the N elements of the array SRC[0..N-1]. The objects are each of size SIZE. The output of the random number generator R is used to make the selection. The algorithm ensures all possible samples are equally likely, assuming a perfect source of randomness. The objects are sampled _without_ replacement, thus each object can only appear once in DEST[k]. It is required that K be less than or equal to `n'. The objects in DEST will be in the same relative order as those in SRC. You will need to call `gsl_ran_shuffle(r, dest, n, size)' if you want to randomize the order. 
The following code shows how to select a random sample of three unique
numbers from the set 0 to 99,

     double a[3], b[100];

     for (i = 0; i < 100; i++)
       {
         b[i] = (double) i;
       }

     gsl_ran_choose (r, a, 3, b, 100, sizeof (double));

 - Function: void gsl_ran_sample (const gsl_rng * R, void * DEST, size_t K, void * SRC, size_t N, size_t SIZE)
     This function is like `gsl_ran_choose' but samples K items from
     the original array of N items SRC with replacement, so the same
     object can appear more than once in the output sequence DEST.
     There is no requirement that K be less than N in this case.

 File: gsl-ref.info, Node: Random Number Distribution Examples, Next: Random Number Distribution References and Further Reading, Prev: Shuffling and Sampling, Up: Random Number Distributions

Examples
========

The following program demonstrates the use of a random number
generator to produce variates from a distribution.  It prints 10
samples from the Poisson distribution with a mean of 3.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;

       int i, n = 10;
       double mu = 3.0;

       /* create a generator chosen by the
          environment variable GSL_RNG_TYPE */

       gsl_rng_env_setup();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       /* print n random variates chosen from
          the poisson distribution with mean
          parameter mu */

       for (i = 0; i < n; i++)
         {
           unsigned int k = gsl_ran_poisson (r, mu);
           printf (" %u", k);
         }

       printf ("\n");
       return 0;
     }

If the library and header files are installed under `/usr/local' (the
default location) then the program can be compiled with these options,

     $ gcc -Wall demo.c -lgsl -lgslcblas -lm

Here is the output of the program,

     $ ./a.out
      2 5 5 2 1 0 3 4 1 1

The variates depend on the seed used by the generator.  The seed for
the default generator type `gsl_rng_default' can be changed with the
`GSL_RNG_SEED' environment variable to produce a different stream of
variates,

     $ GSL_RNG_SEED=123 ./a.out
     GSL_RNG_SEED=123
      4 5 6 3 3 1 4 2 5 5

The following program generates a random walk in two dimensions.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       int i;
       double x = 0, y = 0, dx, dy;

       const gsl_rng_type * T;
       gsl_rng * r;

       gsl_rng_env_setup();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       printf ("%g %g\n", x, y);

       for (i = 0; i < 10; i++)
         {
           gsl_ran_dir_2d (r, &dx, &dy);
           x += dx; y += dy;
           printf ("%g %g\n", x, y);
         }

       return 0;
     }

The output of the program is a list of coordinates which can be
plotted; the accompanying figure, showing three 10-step random walks
from the origin, is not reproduced in this format.

The following program computes the upper and lower cumulative
distribution functions for the standard normal distribution at x=2.

     #include <stdio.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       double P, Q;
       double x = 2.0;

       P = gsl_cdf_ugaussian_P (x);
       printf ("prob(x < %f) = %f\n", x, P);

       Q = gsl_cdf_ugaussian_Q (x);
       printf ("prob(x > %f) = %f\n", x, Q);

       x = gsl_cdf_ugaussian_Pinv (P);
       printf ("Pinv(%f) = %f\n", P, x);

       x = gsl_cdf_ugaussian_Qinv (Q);
       printf ("Qinv(%f) = %f\n", Q, x);

       return 0;
     }

Here is the output of the program,

     prob(x < 2.000000) = 0.977250
     prob(x > 2.000000) = 0.022750
     Pinv(0.977250) = 2.000000
     Qinv(0.022750) = 2.000000

 File: gsl-ref.info, Node: Random Number Distribution References and Further Reading, Prev: Random Number Distribution Examples, Up: Random Number Distributions

References and Further Reading
==============================

For an encyclopaedic coverage of the subject readers are advised to
consult the book `Non-Uniform Random Variate Generation' by Luc
Devroye.  It covers every imaginable distribution and provides
hundreds of algorithms.
Luc Devroye, `Non-Uniform Random Variate Generation', Springer-Verlag, ISBN 0-387-96305-7. The subject of random variate generation is also reviewed by Knuth, who describes algorithms for all the major distributions. Donald E. Knuth, `The Art of Computer Programming: Seminumerical Algorithms' (Vol 2, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896842. The Particle Data Group provides a short review of techniques for generating distributions of random numbers in the "Monte Carlo" section of its Annual Review of Particle Physics. `Review of Particle Properties' R.M. Barnett et al., Physical Review D54, 1 (1996) `http://pdg.lbl.gov/'. The Review of Particle Physics is available online in postscript and pdf format. An overview of methods used to compute cumulative distribution functions can be found in `Statistical Computing' by W.J. Kennedy and J.E. Gentle. Another general reference is `Elements of Statistical Computing' by R.A. Thisted. William E. Kennedy and James E. Gentle, `Statistical Computing' (1980), Marcel Dekker, ISBN 0-8247-6898-1. Ronald A. Thisted, `Elements of Statistical Computing' (1988), Chapman & Hall, ISBN 0-412-01371-1. The cumulative distribution functions for the Gaussian distribution are based on the following papers, `Rational Chebyshev Approximations Using Linear Equations', W.J. Cody, W. Fraser, J.F. Hart. Numerische Mathematik 12, 242-251 (1968). `Rational Chebyshev Approximations for the Error Function', W.J. Cody. Mathematics of Computation 23, n107, 631-637 (July 1969).  File: gsl-ref.info, Node: Statistics, Next: Histograms, Prev: Random Number Distributions, Up: Top Statistics ********** This chapter describes the statistical functions in the library. The basic statistical functions include routines to compute the mean, variance and standard deviation. More advanced functions allow you to calculate absolute deviations, skewness, and kurtosis as well as the median and arbitrary percentiles. The algorithms use recurrence relations to compute average quantities in a stable way, without large intermediate values that might overflow. The functions are available in versions for datasets in the standard floating-point and integer types. The versions for double precision floating-point data have the prefix `gsl_stats' and are declared in the header file `gsl_statistics_double.h'. The versions for integer data have the prefix `gsl_stats_int' and are declared in the header file `gsl_statistics_int.h'. * Menu: * Mean and standard deviation and variance:: * Absolute deviation:: * Higher moments (skewness and kurtosis):: * Autocorrelation:: * Covariance:: * Weighted Samples:: * Maximum and Minimum values:: * Median and Percentiles:: * Example statistical programs:: * Statistics References and Further Reading::  File: gsl-ref.info, Node: Mean and standard deviation and variance, Next: Absolute deviation, Up: Statistics Mean, Standard Deviation and Variance ===================================== - Function: double gsl_stats_mean (const double DATA[], size_t STRIDE, size_t N) This function returns the arithmetic mean of DATA, a dataset of length N with stride STRIDE. The arithmetic mean, or "sample mean", is denoted by \Hat\mu and defined as, \Hat\mu = (1/N) \sum x_i where x_i are the elements of the dataset DATA. For samples drawn from a gaussian distribution the variance of \Hat\mu is \sigma^2 / N. 
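The STRIDE argument selects every STRIDE-th element of the array, which is convenient for data stored in an interleaved layout. The following short sketch illustrates this (the interleaved (x,y) array is simply an invented example, not data from the library),

     #include <stdio.h>
     #include <gsl/gsl_statistics.h>

     int
     main (void)
     {
       /* interleaved pairs (x, y): the x values sit at even
          positions, so they form a dataset with stride 2 */
       double xy[8] = { 1.0, 10.0, 2.0, 20.0, 3.0, 30.0, 4.0, 40.0 };

       double mean_x = gsl_stats_mean (xy, 2, 4);      /* mean of 1, 2, 3, 4 */
       double mean_y = gsl_stats_mean (xy + 1, 2, 4);  /* mean of 10, ..., 40 */

       printf ("mean x = %g, mean y = %g\n", mean_x, mean_y);
       return 0;
     }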
- Function: double gsl_stats_variance (const double DATA[], size_t STRIDE, size_t N) This function returns the estimated, or "sample", variance of DATA, a dataset of length N with stride STRIDE. The estimated variance is denoted by \Hat\sigma^2 and is defined by, \Hat\sigma^2 = (1/(N-1)) \sum (x_i - \Hat\mu)^2 where x_i are the elements of the dataset DATA. Note that the normalization factor of 1/(N-1) results from the derivation of \Hat\sigma^2 as an unbiased estimator of the population variance \sigma^2. For samples drawn from a gaussian distribution the variance of \Hat\sigma^2 itself is 2 \sigma^4 / N. This function computes the mean via a call to `gsl_stats_mean'. If you have already computed the mean then you can pass it directly to `gsl_stats_variance_m'. - Function: double gsl_stats_variance_m (const double DATA[], size_t STRIDE, size_t N, double MEAN) This function returns the sample variance of DATA relative to the given value of MEAN. The function is computed with \Hat\mu replaced by the value of MEAN that you supply, \Hat\sigma^2 = (1/(N-1)) \sum (x_i - mean)^2 - Function: double gsl_stats_sd (const double DATA[], size_t STRIDE, size_t N) - Function: double gsl_stats_sd_m (const double DATA[], size_t STRIDE, size_t N, double MEAN) The standard deviation is defined as the square root of the variance. These functions return the square root of the corresponding variance functions above. - Function: double gsl_stats_variance_with_fixed_mean (const double DATA[], size_t STRIDE, size_t N, double MEAN) This function computes an unbiased estimate of the variance of DATA when the population mean MEAN of the underlying distribution is known _a priori_. In this case the estimator for the variance uses the factor 1/N and the sample mean \Hat\mu is replaced by the known population mean \mu, \Hat\sigma^2 = (1/N) \sum (x_i - \mu)^2 - Function: double gsl_stats_sd_with_fixed_mean (const double DATA[], size_t STRIDE, size_t N, double MEAN) This function calculates the standard deviation of DATA for a fixed population mean MEAN. The result is the square root of the corresponding variance function.  File: gsl-ref.info, Node: Absolute deviation, Next: Higher moments (skewness and kurtosis), Prev: Mean and standard deviation and variance, Up: Statistics Absolute deviation ================== - Function: double gsl_stats_absdev (const double DATA[], size_t STRIDE, size_t N) This function computes the absolute deviation from the mean of DATA, a dataset of length N with stride STRIDE. The absolute deviation from the mean is defined as, absdev = (1/N) \sum |x_i - \Hat\mu| where x_i are the elements of the dataset DATA. The absolute deviation from the mean provides a more robust measure of the width of a distribution than the variance. This function computes the mean of DATA via a call to `gsl_stats_mean'. - Function: double gsl_stats_absdev_m (const double DATA[], size_t STRIDE, size_t N, double MEAN) This function computes the absolute deviation of the dataset DATA relative to the given value of MEAN, absdev = (1/N) \sum |x_i - mean| This function is useful if you have already computed the mean of DATA (and want to avoid recomputing it), or wish to calculate the absolute deviation relative to another value (such as zero, or the median).  
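As an illustration of how the `_m' variants avoid a second pass over the data, here is a small sketch (the dataset is made up for the example) which computes the mean once and reuses it for the variance and the absolute deviation, and also evaluates the absolute deviation about zero,

     #include <stdio.h>
     #include <gsl/gsl_statistics.h>

     int
     main (void)
     {
       double data[6] = { 2.0, 4.0, 4.0, 4.0, 5.0, 5.0 };
       size_t n = 6;

       /* compute the mean once and reuse it, avoiding the extra
          pass that gsl_stats_variance and gsl_stats_absdev would
          make internally */
       double mean = gsl_stats_mean (data, 1, n);
       double var  = gsl_stats_variance_m (data, 1, n, mean);
       double adev = gsl_stats_absdev_m (data, 1, n, mean);

       /* absolute deviation about zero instead of the mean */
       double adev0 = gsl_stats_absdev_m (data, 1, n, 0.0);

       printf ("mean %g variance %g absdev %g absdev(0) %g\n",
               mean, var, adev, adev0);
       return 0;
     }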
File: gsl-ref.info, Node: Higher moments (skewness and kurtosis), Next: Autocorrelation, Prev: Absolute deviation, Up: Statistics Higher moments (skewness and kurtosis) ====================================== - Function: double gsl_stats_skew (const double DATA[], size_t STRIDE, size_t N) This function computes the skewness of DATA, a dataset of length N with stride STRIDE. The skewness is defined as, skew = (1/N) \sum ((x_i - \Hat\mu)/\Hat\sigma)^3 where x_i are the elements of the dataset DATA. The skewness measures the asymmetry of the tails of a distribution. The function computes the mean and estimated standard deviation of DATA via calls to `gsl_stats_mean' and `gsl_stats_sd'. - Function: double gsl_stats_skew_m_sd (const double DATA[], size_t STRIDE, size_t N, double MEAN, double SD) This function computes the skewness of the dataset DATA using the given values of the mean MEAN and standard deviation SD, skew = (1/N) \sum ((x_i - mean)/sd)^3 This function is useful if you have already computed the mean and standard deviation of DATA and want to avoid recomputing them. - Function: double gsl_stats_kurtosis (const double DATA[], size_t STRIDE, size_t N) This function computes the kurtosis of DATA, a dataset of length N with stride STRIDE. The kurtosis is defined as, kurtosis = ((1/N) \sum ((x_i - \Hat\mu)/\Hat\sigma)^4) - 3 The kurtosis measures how sharply peaked a distribution is, relative to its width. The kurtosis is normalized to zero for a gaussian distribution. - Function: double gsl_stats_kurtosis_m_sd (const double DATA[], size_t STRIDE, size_t N, double MEAN, double SD) This function computes the kurtosis of the dataset DATA using the given values of the mean MEAN and standard deviation SD, kurtosis = ((1/N) \sum ((x_i - mean)/sd)^4) - 3 This function is useful if you have already computed the mean and standard deviation of DATA and want to avoid recomputing them.  File: gsl-ref.info, Node: Autocorrelation, Next: Covariance, Prev: Higher moments (skewness and kurtosis), Up: Statistics Autocorrelation =============== - Function: double gsl_stats_lag1_autocorrelation (const double DATA[], const size_t STRIDE, const size_t N) This function computes the lag-1 autocorrelation of the dataset DATA. a_1 = {\sum_{i = 2}^{n} (x_{i} - \Hat\mu) (x_{i-1} - \Hat\mu) \over \sum_{i = 1}^{n} (x_{i} - \Hat\mu) (x_{i} - \Hat\mu)} - Function: double gsl_stats_lag1_autocorrelation_m (const double DATA[], const size_t STRIDE, const size_t N, const double MEAN) This function computes the lag-1 autocorrelation of the dataset DATA using the given value of the mean MEAN.  File: gsl-ref.info, Node: Covariance, Next: Weighted Samples, Prev: Autocorrelation, Up: Statistics Covariance ========== - Function: double gsl_stats_covariance (const double DATA1[], const size_t STRIDE1, const double DATA2[], const size_t STRIDE2, const size_t N) This function computes the covariance of the datasets DATA1 and DATA2 which must both be of the same length N. covar = (1/(n - 1)) \sum_{i = 1}^{n} (x_i - \Hat x) (y_i - \Hat y) - Function: double gsl_stats_covariance_m (const double DATA1[], const size_t STRIDE1, const double DATA2[], const size_t STRIDE2, const size_t N, const double MEAN1, const double MEAN2) This function computes the covariance of the datasets DATA1 and DATA2 using the given values of the means, MEAN1 and MEAN2. This is useful if you have already computed the means of DATA1 and DATA2 and want to avoid recomputing them.
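A minimal sketch of these routines in use is shown below (the two datasets are invented for the example); it computes the covariance of two arrays and the lag-1 autocorrelation of one of them, reusing a precomputed mean,

     #include <stdio.h>
     #include <gsl/gsl_statistics.h>

     int
     main (void)
     {
       double x[5] = { 1.0, 2.0, 3.0, 4.0, 5.0 };
       double y[5] = { 2.1, 3.9, 6.2, 8.0, 9.9 };

       /* covariance of the two datasets (both stride 1, length 5) */
       double c = gsl_stats_covariance (x, 1, y, 1, 5);

       /* lag-1 autocorrelation of y, reusing its mean */
       double mean_y = gsl_stats_mean (y, 1, 5);
       double a1 = gsl_stats_lag1_autocorrelation_m (y, 1, 5, mean_y);

       printf ("covariance %g  lag-1 autocorrelation %g\n", c, a1);
       return 0;
     }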
File: gsl-ref.info, Node: Weighted Samples, Next: Maximum and Minimum values, Prev: Covariance, Up: Statistics Weighted Samples ================ The functions described in this section allow the computation of statistics for weighted samples. The functions accept an array of samples, x_i, with associated weights, w_i. Each sample x_i is considered as having been drawn from a Gaussian distribution with variance \sigma_i^2. The sample weight w_i is defined as the reciprocal of this variance, w_i = 1/\sigma_i^2. Setting a weight to zero corresponds to removing a sample from a dataset. - Function: double gsl_stats_wmean (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N) This function returns the weighted mean of the dataset DATA with stride STRIDE and length N, using the set of weights W with stride WSTRIDE and length N. The weighted mean is defined as, \Hat\mu = (\sum w_i x_i) / (\sum w_i) - Function: double gsl_stats_wvariance (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N) This function returns the estimated variance of the dataset DATA with stride STRIDE and length N, using the set of weights W with stride WSTRIDE and length N. The estimated variance of a weighted dataset is defined as, \Hat\sigma^2 = ((\sum w_i)/((\sum w_i)^2 - \sum (w_i^2))) \sum w_i (x_i - \Hat\mu)^2 Note that this expression reduces to an unweighted variance with the familiar 1/(N-1) factor when there are N equal non-zero weights. - Function: double gsl_stats_wvariance_m (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N, double WMEAN) This function returns the estimated variance of the weighted dataset DATA using the given weighted mean WMEAN. - Function: double gsl_stats_wsd (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N) The standard deviation is defined as the square root of the variance. This function returns the square root of the corresponding variance function `gsl_stats_wvariance' above. - Function: double gsl_stats_wsd_m (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N, double WMEAN) This function returns the square root of the corresponding variance function `gsl_stats_wvariance_m' above. - Function: double gsl_stats_wvariance_with_fixed_mean (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N, const double MEAN) This function computes an unbiased estimate of the variance of weighted dataset DATA when the population mean MEAN of the underlying distribution is known _a priori_. In this case the estimator for the variance replaces the sample mean \Hat\mu by the known population mean \mu, \Hat\sigma^2 = (\sum w_i (x_i - \mu)^2) / (\sum w_i) - Function: double gsl_stats_wsd_with_fixed_mean (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N, const double MEAN) The standard deviation is defined as the square root of the variance. This function returns the square root of the corresponding variance function above. - Function: double gsl_stats_wabsdev (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N) This function computes the weighted absolute deviation from the weighted mean of DATA. 
The absolute deviation from the mean is defined as, absdev = (\sum w_i |x_i - \Hat\mu|) / (\sum w_i) - Function: double gsl_stats_wabsdev_m (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N, double WMEAN) This function computes the absolute deviation of the weighted dataset DATA about the given weighted mean WMEAN. - Function: double gsl_stats_wskew (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N) This function computes the weighted skewness of the dataset DATA. skew = (\sum w_i ((x_i - xbar)/\sigma)^3) / (\sum w_i) - Function: double gsl_stats_wskew_m_sd (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N, double WMEAN, double WSD) This function computes the weighted skewness of the dataset DATA using the given values of the weighted mean and weighted standard deviation, WMEAN and WSD. - Function: double gsl_stats_wkurtosis (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N) This function computes the weighted kurtosis of the dataset DATA. kurtosis = ((\sum w_i ((x_i - xbar)/sigma)^4) / (\sum w_i)) - 3 - Function: double gsl_stats_wkurtosis_m_sd (const double W[], size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N, double WMEAN, double WSD) This function computes the weighted kurtosis of the dataset DATA using the given values of the weighted mean and weighted standard deviation, WMEAN and WSD.  File: gsl-ref.info, Node: Maximum and Minimum values, Next: Median and Percentiles, Prev: Weighted Samples, Up: Statistics Maximum and Minimum values ========================== The following functions find the maximum and minimum values of a dataset (or their indices). If the data contains `NaN's then a `NaN' will be returned, since the maximum or minimum value is undefined. For functions which return an index, the location of the first `NaN' in the array is returned. - Function: double gsl_stats_max (const double DATA[], size_t STRIDE, size_t N) This function returns the maximum value in DATA, a dataset of length N with stride STRIDE. The maximum value is defined as the value of the element x_i which satisfies x_i >= x_j for all j. If you want instead to find the element with the largest absolute magnitude you will need to apply `fabs' or `abs' to your data before calling this function. - Function: double gsl_stats_min (const double DATA[], size_t STRIDE, size_t N) This function returns the minimum value in DATA, a dataset of length N with stride STRIDE. The minimum value is defined as the value of the element x_i which satisfies x_i <= x_j for all j. If you want instead to find the element with the smallest absolute magnitude you will need to apply `fabs' or `abs' to your data before calling this function. - Function: void gsl_stats_minmax (double * MIN, double * MAX, const double DATA[], size_t STRIDE, size_t N) This function finds both the minimum and maximum values MIN, MAX in DATA in a single pass. - Function: size_t gsl_stats_max_index (const double DATA[], size_t STRIDE, size_t N) This function returns the index of the maximum value in DATA, a dataset of length N with stride STRIDE. The maximum value is defined as the value of the element x_i which satisfies x_i >= x_j for all j. When there are several equal maximum elements then the first one is chosen. - Function: size_t gsl_stats_min_index (const double DATA[], size_t STRIDE, size_t N) This function returns the index of the minimum value in DATA, a dataset of length N with stride STRIDE. 
The minimum value is defined as the value of the element x_i which satisfies x_i <= x_j for all j. When there are several equal minimum elements then the first one is chosen. - Function: void gsl_stats_minmax_index (size_t * MIN_INDEX, size_t * MAX_INDEX, const double DATA[], size_t STRIDE, size_t N) This function returns the indexes MIN_INDEX, MAX_INDEX of the minimum and maximum values in DATA in a single pass.  File: gsl-ref.info, Node: Median and Percentiles, Next: Example statistical programs, Prev: Maximum and Minimum values, Up: Statistics Median and Percentiles ====================== The median and percentile functions described in this section operate on sorted data. For convenience we use "quantiles", measured on a scale of 0 to 1, instead of percentiles (which use a scale of 0 to 100). - Function: double gsl_stats_median_from_sorted_data (const double SORTED_DATA[], size_t STRIDE, size_t N) This function returns the median value of SORTED_DATA, a dataset of length N with stride STRIDE. The elements of the array must be in ascending numerical order. There are no checks to see whether the data are sorted, so the function `gsl_sort' should always be used first. When the dataset has an odd number of elements the median is the value of element (n-1)/2. When the dataset has an even number of elements the median is the mean of the two nearest middle values, elements (n-1)/2 and n/2. Since the algorithm for computing the median involves interpolation this function always returns a floating-point number, even for integer data types. - Function: double gsl_stats_quantile_from_sorted_data (const double SORTED_DATA[], size_t STRIDE, size_t N, double F) This function returns a quantile value of SORTED_DATA, a double-precision array of length N with stride STRIDE. The elements of the array must be in ascending numerical order. The quantile is determined by F, a fraction between 0 and 1. For example, to compute the value of the 75th percentile F should have the value 0.75. There are no checks to see whether the data are sorted, so the function `gsl_sort' should always be used first. The quantile is found by interpolation, using the formula quantile = (1 - \delta) x_i + \delta x_{i+1} where i is `floor'((n - 1)f) and \delta is (n-1)f - i. Thus the minimum value of the array (`data[0*stride]') is given by F equal to zero, the maximum value (`data[(n-1)*stride]') is given by F equal to one and the median value is given by F equal to 0.5. Since the algorithm for computing quantiles involves interpolation this function always returns a floating-point number, even for integer data types.
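To make the interpolation concrete, the following sketch evaluates the formula by hand for f = 0.75 and checks it against `gsl_stats_quantile_from_sorted_data' (it reuses the sorted five-value dataset from the example program in the next section),

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_statistics.h>

     int
     main (void)
     {
       /* already sorted, stride 1, n = 5 */
       double data[5] = { 12.6, 16.5, 17.2, 18.1, 18.3 };
       size_t n = 5;
       double f = 0.75;

       /* interpolation by hand: i = floor((n-1)f), delta = (n-1)f - i;
          here (n-1)f = 3, so i = 3 and delta = 0 */
       size_t i = (size_t) floor ((n - 1) * f);
       double delta = (n - 1) * f - i;
       double by_hand = (1 - delta) * data[i]
                        + delta * data[i + 1 < n ? i + 1 : i];

       double q = gsl_stats_quantile_from_sorted_data (data, 1, n, f);

       /* both values are 18.1, the upper quartile */
       printf ("by hand %g, library %g\n", by_hand, q);
       return 0;
     }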
File: gsl-ref.info, Node: Example statistical programs, Next: Statistics References and Further Reading, Prev: Median and Percentiles, Up: Statistics

Examples
========

Here is a basic example of how to use the statistical functions:

     #include <stdio.h>
     #include <gsl/gsl_statistics.h>

     int
     main (void)
     {
       double data[5] = {17.2, 18.1, 16.5, 18.3, 12.6};
       double mean, variance, largest, smallest;

       mean     = gsl_stats_mean (data, 1, 5);
       variance = gsl_stats_variance (data, 1, 5);
       largest  = gsl_stats_max (data, 1, 5);
       smallest = gsl_stats_min (data, 1, 5);

       printf ("The dataset is %g, %g, %g, %g, %g\n",
               data[0], data[1], data[2], data[3], data[4]);

       printf ("The sample mean is %g\n", mean);
       printf ("The estimated variance is %g\n", variance);
       printf ("The largest value is %g\n", largest);
       printf ("The smallest value is %g\n", smallest);
       return 0;
     }

The program should produce the following output,

     The dataset is 17.2, 18.1, 16.5, 18.3, 12.6
     The sample mean is 16.54
     The estimated variance is 5.373
     The largest value is 18.3
     The smallest value is 12.6

Here is an example using sorted data,

     #include <stdio.h>
     #include <gsl/gsl_sort.h>
     #include <gsl/gsl_statistics.h>

     int
     main (void)
     {
       double data[5] = {17.2, 18.1, 16.5, 18.3, 12.6};
       double median, upperq, lowerq;

       printf ("Original dataset:  %g, %g, %g, %g, %g\n",
               data[0], data[1], data[2], data[3], data[4]);

       gsl_sort (data, 1, 5);

       printf ("Sorted dataset: %g, %g, %g, %g, %g\n",
               data[0], data[1], data[2], data[3], data[4]);

       median = gsl_stats_median_from_sorted_data (data, 1, 5);

       upperq = gsl_stats_quantile_from_sorted_data (data, 1, 5, 0.75);
       lowerq = gsl_stats_quantile_from_sorted_data (data, 1, 5, 0.25);

       printf ("The median is %g\n", median);
       printf ("The upper quartile is %g\n", upperq);
       printf ("The lower quartile is %g\n", lowerq);
       return 0;
     }

This program should produce the following output,

     Original dataset:  17.2, 18.1, 16.5, 18.3, 12.6
     Sorted dataset: 12.6, 16.5, 17.2, 18.1, 18.3
     The median is 17.2
     The upper quartile is 18.1
     The lower quartile is 16.5

File: gsl-ref.info, Node: Statistics References and Further Reading, Prev: Example statistical programs, Up: Statistics

References and Further Reading
==============================

The standard reference for almost any topic in statistics is the multi-volume `Advanced Theory of Statistics' by Kendall and Stuart. Maurice Kendall, Alan Stuart, and J. Keith Ord. `The Advanced Theory of Statistics' (multiple volumes) reprinted as `Kendall's Advanced Theory of Statistics'. Wiley, ISBN 047023380X. Many statistical concepts can be more easily understood by a Bayesian approach. The following book by Gelman, Carlin, Stern and Rubin gives a comprehensive coverage of the subject. Andrew Gelman, John B. Carlin, Hal S. Stern, Donald B. Rubin. `Bayesian Data Analysis'. Chapman & Hall, ISBN 0412039915. For physicists the Particle Data Group provides useful reviews of Probability and Statistics in the "Mathematical Tools" section of its Annual Review of Particle Physics. `Review of Particle Properties' R.M. Barnett et al., Physical Review D54, 1 (1996) The Review of Particle Physics is available online at the website `http://pdg.lbl.gov/'.

File: gsl-ref.info, Node: Histograms, Next: N-tuples, Prev: Statistics, Up: Top

Histograms
**********

This chapter describes functions for creating histograms. Histograms provide a convenient way of summarizing the distribution of a set of data. A histogram consists of a set of "bins" which count the number of events falling into a given range of a continuous variable x.
In GSL the bins of a histogram contain floating-point numbers, so they can be used to record both integer and non-integer distributions. The bins can use arbitrary sets of ranges (uniformly spaced bins are the default). Both one and two-dimensional histograms are supported. Once a histogram has been created it can also be converted into a probability distribution function. The library provides efficient routines for selecting random samples from probability distributions. This can be useful for generating simulations based on real data. The functions are declared in the header files `gsl_histogram.h' and `gsl_histogram2d.h'. * Menu: * The histogram struct:: * Histogram allocation:: * Copying Histograms:: * Updating and accessing histogram elements:: * Searching histogram ranges:: * Histogram Statistics:: * Histogram Operations:: * Reading and writing histograms:: * Resampling from histograms:: * The histogram probability distribution struct:: * Example programs for histograms:: * Two dimensional histograms:: * The 2D histogram struct:: * 2D Histogram allocation:: * Copying 2D Histograms:: * Updating and accessing 2D histogram elements:: * Searching 2D histogram ranges:: * 2D Histogram Statistics:: * 2D Histogram Operations:: * Reading and writing 2D histograms:: * Resampling from 2D histograms:: * Example programs for 2D histograms::  File: gsl-ref.info, Node: The histogram struct, Next: Histogram allocation, Up: Histograms The histogram struct ==================== A histogram is defined by the following struct, - Data Type: gsl_histogram `size_t n' This is the number of histogram bins `double * range' The ranges of the bins are stored in an array of N+1 elements pointed to by RANGE. `double * bin' The counts for each bin are stored in an array of N elements pointed to by BIN. The bins are floating-point numbers, so you can increment them by non-integer values if necessary. The range for BIN[i] is given by RANGE[i] to RANGE[i+1]. For n bins there are n+1 entries in the array RANGE. Each bin is inclusive at the lower end and exclusive at the upper end. Mathematically this means that the bins are defined by the following inequality, bin[i] corresponds to range[i] <= x < range[i+1] Here is a diagram of the correspondence between ranges and bins on the number-line for x, [ bin[0] )[ bin[1] )[ bin[2] )[ bin[3] )[ bin[4] ) ---|---------|---------|---------|---------|---------|--- x r[0] r[1] r[2] r[3] r[4] r[5] In this picture the values of the RANGE array are denoted by r. On the left-hand side of each bin the square bracket `[' denotes an inclusive lower bound (r <= x), and the round parentheses `)' on the right-hand side denote an exclusive upper bound (x < r). Thus any samples which fall on the upper end of the histogram are excluded. If you want to include this value for the last bin you will need to add an extra bin to your histogram. The `gsl_histogram' struct and its associated functions are defined in the header file `gsl_histogram.h'.  File: gsl-ref.info, Node: Histogram allocation, Next: Copying Histograms, Prev: The histogram struct, Up: Histograms Histogram allocation ==================== The functions for allocating memory to a histogram follow the style of `malloc' and `free'. In addition they also perform their own error checking. If there is insufficient memory available to allocate a histogram then the functions call the error handler (with an error number of `GSL_ENOMEM') in addition to returning a null pointer. 
Thus if you use the library error handler to abort your program then it isn't necessary to check every histogram `alloc'. - Function: gsl_histogram * gsl_histogram_alloc (size_t N) This function allocates memory for a histogram with N bins, and returns a pointer to a newly created `gsl_histogram' struct. If insufficient memory is available a null pointer is returned and the error handler is invoked with an error code of `GSL_ENOMEM'. The bins and ranges are not initialized, and should be prepared using one of the range-setting functions below in order to make the histogram ready for use. - Function: int gsl_histogram_set_ranges (gsl_histogram * H, const double RANGE[], size_t SIZE) This function sets the ranges of the existing histogram H using the array RANGE of size SIZE. The values of the histogram bins are reset to zero. The `range' array should contain the desired bin limits. The ranges can be arbitrary, subject to the restriction that they are monotonically increasing. The following example shows how to create a histogram with logarithmic bins with ranges [1,10), [10,100) and [100,1000). gsl_histogram * h = gsl_histogram_alloc (3); /* bin[0] covers the range 1 <= x < 10 */ /* bin[1] covers the range 10 <= x < 100 */ /* bin[2] covers the range 100 <= x < 1000 */ double range[4] = { 1.0, 10.0, 100.0, 1000.0 }; gsl_histogram_set_ranges (h, range, 4); Note that the size of the RANGE array should be defined to be one element bigger than the number of bins. The additional element is required for the upper value of the final bin. - Function: int gsl_histogram_set_ranges_uniform (gsl_histogram * H, double XMIN, double XMAX) This function sets the ranges of the existing histogram H to cover the range XMIN to XMAX uniformly. The values of the histogram bins are reset to zero. The bin ranges are shown in the table below, bin[0] corresponds to xmin <= x < xmin + d bin[1] corresponds to xmin + d <= x < xmin + 2 d ...... bin[n-1] corresponds to xmin + (n-1)d <= x < xmax where d is the bin spacing, d = (xmax-xmin)/n. - Function: void gsl_histogram_free (gsl_histogram * H) This function frees the histogram H and all of the memory associated with it.  File: gsl-ref.info, Node: Copying Histograms, Next: Updating and accessing histogram elements, Prev: Histogram allocation, Up: Histograms Copying Histograms ================== - Function: int gsl_histogram_memcpy (gsl_histogram * DEST, const gsl_histogram * SRC) This function copies the histogram SRC into the pre-existing histogram DEST, making DEST into an exact copy of SRC. The two histograms must be of the same size. - Function: gsl_histogram * gsl_histogram_clone (const gsl_histogram * SRC) This function returns a pointer to a newly created histogram which is an exact copy of the histogram SRC.  File: gsl-ref.info, Node: Updating and accessing histogram elements, Next: Searching histogram ranges, Prev: Copying Histograms, Up: Histograms Updating and accessing histogram elements ========================================= There are two ways to access histogram bins, either by specifying an x coordinate or by using the bin-index directly. The functions for accessing the histogram through x coordinates use a binary search to identify the bin which covers the appropriate range. - Function: int gsl_histogram_increment (gsl_histogram * H, double X) This function updates the histogram H by adding one (1.0) to the bin whose range contains the coordinate X. If X lies in the valid range of the histogram then the function returns zero to indicate success. 
If X is less than the lower limit of the histogram then the function returns `GSL_EDOM', and none of bins are modified. Similarly, if the value of X is greater than or equal to the upper limit of the histogram then the function returns `GSL_EDOM', and none of the bins are modified. The error handler is not called, however, since it is often necessary to compute histograms for a small range of a larger dataset, ignoring the values outside the range of interest. - Function: int gsl_histogram_accumulate (gsl_histogram * H, double X, double WEIGHT) This function is similar to `gsl_histogram_increment' but increases the value of the appropriate bin in the histogram H by the floating-point number WEIGHT. - Function: double gsl_histogram_get (const gsl_histogram * H, size_t I) This function returns the contents of the I-th bin of the histogram H. If I lies outside the valid range of indices for the histogram then the error handler is called with an error code of `GSL_EDOM' and the function returns 0. - Function: int gsl_histogram_get_range (const gsl_histogram * H, size_t I, double * LOWER, double * UPPER) This function finds the upper and lower range limits of the I-th bin of the histogram H. If the index I is valid then the corresponding range limits are stored in LOWER and UPPER. The lower limit is inclusive (i.e. events with this coordinate are included in the bin) and the upper limit is exclusive (i.e. events with the coordinate of the upper limit are excluded and fall in the neighboring higher bin, if it exists). The function returns 0 to indicate success. If I lies outside the valid range of indices for the histogram then the error handler is called and the function returns an error code of `GSL_EDOM'. - Function: double gsl_histogram_max (const gsl_histogram * H) - Function: double gsl_histogram_min (const gsl_histogram * H) - Function: size_t gsl_histogram_bins (const gsl_histogram * H) These functions return the maximum upper and minimum lower range limits and the number of bins of the histogram H. They provide a way of determining these values without accessing the `gsl_histogram' struct directly. - Function: void gsl_histogram_reset (gsl_histogram * H) This function resets all the bins in the histogram H to zero.  File: gsl-ref.info, Node: Searching histogram ranges, Next: Histogram Statistics, Prev: Updating and accessing histogram elements, Up: Histograms Searching histogram ranges ========================== The following functions are used by the access and update routines to locate the bin which corresponds to a given x coordinate. - Function: int gsl_histogram_find (const gsl_histogram * H, double X, size_t * I) This function finds and sets the index I to the bin number which covers the coordinate X in the histogram H. The bin is located using a binary search. The search includes an optimization for histograms with uniform range, and will return the correct bin immediately in this case. If X is found in the range of the histogram then the function sets the index I and returns `GSL_SUCCESS'. If X lies outside the valid range of the histogram then the function returns `GSL_EDOM' and the error handler is invoked.  File: gsl-ref.info, Node: Histogram Statistics, Next: Histogram Operations, Prev: Searching histogram ranges, Up: Histograms Histogram Statistics ==================== - Function: double gsl_histogram_max_val (const gsl_histogram * H) This function returns the maximum value contained in the histogram bins. 
- Function: size_t gsl_histogram_max_bin (const gsl_histogram * H) This function returns the index of the bin containing the maximum value. In the case where several bins contain the same maximum value the smallest index is returned. - Function: double gsl_histogram_min_val (const gsl_histogram * H) This function returns the minimum value contained in the histogram bins. - Function: size_t gsl_histogram_min_bin (const gsl_histogram * H) This function returns the index of the bin containing the minimum value. In the case where several bins contain the same minimum value the smallest index is returned. - Function: double gsl_histogram_mean (const gsl_histogram * H) This function returns the mean of the histogrammed variable, where the histogram is regarded as a probability distribution. Negative bin values are ignored for the purposes of this calculation. The accuracy of the result is limited by the bin width. - Function: double gsl_histogram_sigma (const gsl_histogram * H) This function returns the standard deviation of the histogrammed variable, where the histogram is regarded as a probability distribution. Negative bin values are ignored for the purposes of this calculation. The accuracy of the result is limited by the bin width. - Function: double gsl_histogram_sum (const gsl_histogram * H) This function returns the sum of all bin values. Negative bin values are included in the sum.  File: gsl-ref.info, Node: Histogram Operations, Next: Reading and writing histograms, Prev: Histogram Statistics, Up: Histograms Histogram Operations ==================== - Function: int gsl_histogram_equal_bins_p (const gsl_histogram * H1, const gsl_histogram * H2) This function returns 1 if all of the individual bin ranges of the two histograms are identical, and 0 otherwise. - Function: int gsl_histogram_add (gsl_histogram * H1, const gsl_histogram * H2) This function adds the contents of the bins in histogram H2 to the corresponding bins of histogram H1, i.e. h'_1(i) = h_1(i) + h_2(i). The two histograms must have identical bin ranges. - Function: int gsl_histogram_sub (gsl_histogram * H1, const gsl_histogram * H2) This function subtracts the contents of the bins in histogram H2 from the corresponding bins of histogram H1, i.e. h'_1(i) = h_1(i) - h_2(i). The two histograms must have identical bin ranges. - Function: int gsl_histogram_mul (gsl_histogram * H1, const gsl_histogram * H2) This function multiplies the contents of the bins of histogram H1 by the contents of the corresponding bins in histogram H2, i.e. h'_1(i) = h_1(i) * h_2(i). The two histograms must have identical bin ranges. - Function: int gsl_histogram_div (gsl_histogram * H1, const gsl_histogram * H2) This function divides the contents of the bins of histogram H1 by the contents of the corresponding bins in histogram H2, i.e. h'_1(i) = h_1(i) / h_2(i). The two histograms must have identical bin ranges. - Function: int gsl_histogram_scale (gsl_histogram * H, double SCALE) This function multiplies the contents of the bins of histogram H by the constant SCALE, i.e. h'_1(i) = h_1(i) * scale. - Function: int gsl_histogram_shift (gsl_histogram * H, double OFFSET) This function shifts the contents of the bins of histogram H by the constant OFFSET, i.e. h'_1(i) = h_1(i) + offset.
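The following sketch shows how the allocation, update and statistics routines described above fit together (the sample values are invented for the illustration),

     #include <stdio.h>
     #include <gsl/gsl_histogram.h>

     int
     main (void)
     {
       /* 5 uniform bins covering 0 <= x < 10 */
       gsl_histogram * h = gsl_histogram_alloc (5);
       gsl_histogram_set_ranges_uniform (h, 0.0, 10.0);

       /* some made-up sample values; 12.5 lies outside the range,
          so gsl_histogram_increment returns GSL_EDOM for it and
          leaves the bins unchanged */
       double samples[6] = { 0.5, 2.3, 2.7, 6.1, 9.9, 12.5 };
       size_t i;

       for (i = 0; i < 6; i++)
         gsl_histogram_increment (h, samples[i]);

       /* bin 1 covers 2 <= x < 4 and has collected 2 events */
       printf ("bin 1 contains %g events\n", gsl_histogram_get (h, 1));

       printf ("mean %g, sigma %g, total %g\n",
               gsl_histogram_mean (h),
               gsl_histogram_sigma (h),
               gsl_histogram_sum (h));

       gsl_histogram_free (h);
       return 0;
     }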