diff --git a/RK-coeff-opt/oc_albrecht.m b/RK-coeff-opt/oc_albrecht.m index bda0942..dfb4d92 100644 --- a/RK-coeff-opt/oc_albrecht.m +++ b/RK-coeff-opt/oc_albrecht.m @@ -1,9 +1,9 @@ function coneq=oc_albrecht(A,b,c,p) % function coneq=oc_albrecht(A,b,c,p) % -% Order conditions for SSP RK Methods. +% Order conditions for SSP RK methods. % -% This version is based on Albrecht's approach +% This version is based on Albrecht's approach. % ..warning:: % diff --git a/RK-coeff-opt/oc_butcher.m b/RK-coeff-opt/oc_butcher.m index 8aec5a2..0027d3d 100644 --- a/RK-coeff-opt/oc_butcher.m +++ b/RK-coeff-opt/oc_butcher.m @@ -1,7 +1,7 @@ function coneq=oc_butcher(A,b,c,p) % function coneq=oc_butcher(A,b,c,p) % -% Order conditions for RKMs. +% Order conditions for Runge-Kutta methods. % This version is based on Butcher's approach. % % Assumes `p>1`. diff --git a/RK-coeff-opt/rk_obj.m b/RK-coeff-opt/rk_obj.m index ee533bb..e2943f7 100644 --- a/RK-coeff-opt/rk_obj.m +++ b/RK-coeff-opt/rk_obj.m @@ -1,17 +1,18 @@ function [r,g]=rk_obj(x,class,s,p,objective) % function [r,g]=rk_obj(x,class,s,p,objective) +% % Objective function for RK optimization. % -% The meaning of the input arguments is as follow: +% The meaning of the input arguments is as follows: % * :math:`x`: vector of the unknowns. % * class: class of method to search ('erk' = explicit RK; 'irk' = implicit RK; 'dirk' = diagonally implicit RK; 'sdirk' = singly diagonally implicit RK; '2S', '3S', '2S*', '3S*' = low-storage formulations). % * :math:`s`:number of stages. % * :math:`p`: order of the RK scheme. % * objective: objective function ('ssp' = maximize SSP coefficient; 'acc' = minimize leading truncation error coefficient). % -% The meaning of the output arguments is as follow: +% The meaning of the output arguments is as follows: % * r: it is a scalar containing the radius of absolute monotonicity if objective = 'ssp' or the value of the leading truncation error coefficient if objective = 'acc'. -% * g: it is a vector and contains the gradient of the objective function respect to the unknowns. It is an array with all zero elements except for the last component which is equal to one if objective = 'ssp' or it is an empty array if objective = 'acc'. +% * g: a vector containing the gradient of the objective function with respect to the unknowns. It is an array with all zero elements except for the last component which is equal to one if objective = 'ssp' or it is an empty array if objective = 'acc'. if strcmp(objective,'ssp') diff --git a/RK-coeff-opt/rk_opt.m b/RK-coeff-opt/rk_opt.m index f27584c..4e2c148 100644 --- a/RK-coeff-opt/rk_opt.m +++ b/RK-coeff-opt/rk_opt.m @@ -22,7 +22,7 @@ % Accuracy optimization is not currently supported for multistep RK methods % * poly_coeff_ind: index of the polynomial coefficients to constrain (`\beta_j`) for `j > p` (j denotes the index of the stage). The default value is an empty array. Note that one should not include any indices `i \le p`, since those are determined by the order conditions. % * poly_coeff_val: constrained values of the polynomial coefficients (`\beta_j`) for `j > p` (tall-tree elementary weights). The default value is an empty array. -% * startvec: vector of the initial guess ('random' = random approach; 'smart' = smart approach; alternatively, the user can provide the startvec array. By default startvec is initialize with random numbers. +% * startvec: vector of the initial guess ('random' = random approach; 'smart' = smart approach; alternatively, the user can provide the startvec array). 
By default startvec is initialized with random numbers. % * solveorderconditions: if set to 1, solve the order conditions first before trying to optimize. The default value is 0. % * np: number of processor to use. If np `> 1` the MATLAB global optimization toolbox *Multistart* is used. The default value is 1 (just one core). % * num_starting_points: Number of starting points for the global optimization per processor. The default value is 10. @@ -36,7 +36,7 @@ % **numerical experiments have shown that when the objective function is the minimization of the leading truncation error coefficient, the interior-point algorithm performs much better than the sqp one.** % % * display: level of display of fmincon solver ('off', 'iter', 'notify' or 'final'). The default value is 'notify'. -% * problem_class: class of problems for which the RK is designed ('linear' or 'nonlinear' problems). This option changes the type of order conditions check, i.e. linear or nonlinear order conditions controll. The default value is 'nonlinear'. +% * problem_class: class of problems for which the RK is designed ('linear' or 'nonlinear' problems). This option changes the type of order conditions check, i.e. linear or nonlinear order conditions control. The default value is 'nonlinear'. % % % .. note:: diff --git a/RK-coeff-opt/write_field.m b/RK-coeff-opt/write_field.m index 580b694..f9c587c 100644 --- a/RK-coeff-opt/write_field.m +++ b/RK-coeff-opt/write_field.m @@ -1,6 +1,8 @@ -function writeField(writeFid,name,value) -% function writeField(writeFid,name,value) +function write_field(writeFid,name,value) +% function write_field(writeFid,name,value) +% +% +% Utility function to write a single parameter and value. fprintf(writeFid,'\n%s\n',name); fprintf(writeFid, [repmat('%5.16E\t', 1, size(value,2)),'\n'], value'); - diff --git a/RKtools/README.rst b/RKtools/README.rst index 75046b5..a88bcbd 100644 --- a/RKtools/README.rst +++ b/RKtools/README.rst @@ -1 +1,8 @@ Some general utilities for analyzing Runge-Kutta methods. + +Some of the routines expect as input a structured array `rk`. +This structure must have the fields `A, b, c`, containing its +Butcher coefficients. Optionally, it may represent an additive +Runge-Kutta method or an embedded pair in which case it should also have +`Ahat`, `bhat`, `chat` containing the coefficients of the secondary +method. diff --git a/RKtools/am_radius.m b/RKtools/am_radius.m index 5a16932..6d1f1aa 100644 --- a/RKtools/am_radius.m +++ b/RKtools/am_radius.m @@ -5,23 +5,21 @@ % of a Runge-Kutta method, given the Butcher array. % % For an `m`-stage method, `A` should be an `m \times m` matrix -% and `b` should be a column vector of length m. +% and `b` and `c` should be column vectors of length m. % -% Accuracy can be changed by modifying the value of eps (default `10^-{10}`) +% Accuracy can be changed by modifying the value of eps (default `10^{-10}`) % Methods with very large radii of a.m. (>50) will require % the default value of rmax to be increased. % % The radius of absolute monotonicity is the largest value of `r` % such that % -% .. 
raw:: latex +% \\begin{align*} +% K(I+rA)^{-1} \\ge & 0 \\\\\\ +% rK(I+rA)^{-1}e_m \\le & e_{m+1} +% \\end{align*} % -% \begin{eqnarray} -% K(I+rA)^{-1} \ge & 0 \\ -% rK(I+rA)^{-1}e_m \le & e_{m+1} -% \end{eqnarray} -% -% where $$ K = \left(\begin{array}{c} A \\ b^T \end{array}\right) $$ +% where $$ K = \\left(\\begin{array}{c} A \\\\\\ b^T \\end{array}\\right) $$ if nargin<5 rmax=50; end diff --git a/am_radius-opt/README.rst b/am_radius-opt/README.rst index 5318005..8a611c3 100644 --- a/am_radius-opt/README.rst +++ b/am_radius-opt/README.rst @@ -3,6 +3,6 @@ This includes codes for optimizing stability functions of multistep, multistage methods and even methods with downwinding. Generally, the optimization problem is phrased as a sequence of linear -programming feasibility problems. For details, see [ketcheson2009]_. +programming feasibility problems. For details, see :cite:`2009_monotonicity`. The optimization of rational functions is experimental. diff --git a/am_radius-opt/Rkp.m b/am_radius-opt/Rkp.m index 90e7874..09f8256 100644 --- a/am_radius-opt/Rkp.m +++ b/am_radius-opt/Rkp.m @@ -3,12 +3,13 @@ % % Find the optimal SSP k-step explicit LMM with order of accuracy p. % -% Inputs: -% * `k` = # of steps -% * `p` = order of accuracy +% Inputs: +% * k = # of steps +% * p = order of accuracy % -% Outputs: -% * `\alpha, \beta` = the coefficients of the method +% Outputs: +% * R = the SSP coefficient +% * alpha, beta = the coefficients of the method % % Requires MATLAB's optimization toolbox for the LP solver. diff --git a/am_radius-opt/Rkp_dw.m b/am_radius-opt/Rkp_dw.m index be887b5..2b736ad 100644 --- a/am_radius-opt/Rkp_dw.m +++ b/am_radius-opt/Rkp_dw.m @@ -4,13 +4,16 @@ % Finds the optimal SSP k-step explicit LMM with order of accuracy p % allowing downwind operators % -% Inputs: k = # of steps -% p = order of accuracy +% Inputs: +% * k = # of steps +% * p = order of accuracy % -% Outputs: alpha, beta, tbeta = the coefficients of the method +% Outputs: +% * R = the SSP coefficient +% * alpha, beta, tbeta = the coefficients of the method % % The method is given by -% `u_n = \sum_{j=0}^{k-1} (\alpha[j] + \beta[j] F(u_{n-k+j} + tbeta[j] tF(u_{n-k+j}))` +% `u_n = \sum_{j=0}^{k-1} (\alpha[j] u_{n-k+j} + \beta[j] F(u_{n-k+j}) + tbeta[j] tF(u_{n-k+j}))` % where tF(u) is the negated downwind operator. 
% % Depends on MATLAB's optimization toolbox for the LP solver diff --git a/am_radius-opt/Rkp_imp.m b/am_radius-opt/Rkp_imp.m index 2f819c1..129e887 100644 --- a/am_radius-opt/Rkp_imp.m +++ b/am_radius-opt/Rkp_imp.m @@ -3,11 +3,13 @@ % % Find the optimal SSP k-step implicit LMM with order of accuracy p % -% Inputs: +% Inputs: % * k = # of steps % * p = order of accuracy % -% Outputs: alpha, beta = the coefficients of the method +% Outputs: +% * R = the SSP coefficient +% * alpha, beta = the coefficients of the method % % Depends on MATLAB's optimization toolbox for the LP solver diff --git a/am_radius-opt/Rkp_imp_dw.m b/am_radius-opt/Rkp_imp_dw.m index 57a4658..508b21b 100644 --- a/am_radius-opt/Rkp_imp_dw.m +++ b/am_radius-opt/Rkp_imp_dw.m @@ -4,10 +4,13 @@ % Finds the optimal k-step implicit LMM with order of accuracy p % allowing downwinding % -% Inputs: k = # of steps -% p = order of accuracy +% Inputs: +% * k = # of steps +% * p = order of accuracy % -% Outputs: alpha, beta, tbeta = the coefficients of the method +% Outputs: +% * R = the SSP coefficient +% * alpha, beta, tbeta = the coefficients of the method % % Depends on MATLAB's optimization toolbox for the LP solver diff --git a/am_radius-opt/Rskp.m b/am_radius-opt/Rskp.m index 2a691ff..1000ff9 100644 --- a/am_radius-opt/Rskp.m +++ b/am_radius-opt/Rskp.m @@ -4,21 +4,21 @@ % Finds the optimal contractive k-step, s-stage GLM with order of accuracy p % for linear problems % -% Inputs: s = # of stages -% k = # of steps -% p = order of accuracy +% Inputs: +% * s = # of stages +% * k = # of steps +% * p = order of accuracy % -% Outputs: -% R = threshold factor -% gamma = coefficients of the polynomials +% Outputs: +% * R = threshold factor +% * gamma = coefficients of the polynomials % % for k=1, the resulting polynomial is % `\sum_{j=0}^m (1+z/R)^j` % -% in general, the resulting stability function is -% (Fill in) +% For details on the general case, see :cite:`2009_monotonicity`. % -%Depends on MATLAB's optimization toolbox for the LP solver +% This routine requires MATLAB's optimization toolbox for the LP solver. %========================================================= %Initialize diff --git a/am_radius-opt/multi_R_opt.m b/am_radius-opt/multi_R_opt.m index 988671a..3cbec3b 100644 --- a/am_radius-opt/multi_R_opt.m +++ b/am_radius-opt/multi_R_opt.m @@ -3,28 +3,21 @@ % % This function is a script to run the routines Rskp, Rkp_dw, Rkp_imp, or % Rkp_imp_dw several times with different inputs, in order to construct tables -% of optimal values like those that appear in [ketcheson2009]_. -% different values of the input parameters, i.e.: -% -% k = [k1, k2, ..., kK]^T, K = length(k), ith-element = # of steps -% p = [p1, p2, ..., pP]^T, P = length(p), ith-element = order of accuracy +% of optimal values like those that appear in :cite:`2009_monotonicity`. % -% and -% -% s = [s1, s2, ..., sS]^T, S = length(s), ith-element = # of stages -% -% when optimal contractive k-step, s-stage GLM are investigated. -% -% The family of method to be considered is specified in the string 'class'. +% The inputs k, p, and (optionally) s should be vectors containing +% the numbers of steps, orders of accuracy, and numbers of stages +% to be considered, respectively. The output includes results for +% all combinations of values from the input vectors. +% +% The family of methods to be considered is specified in the string 'class'. +% Valid values are: % -% Note that in general `S\ne K\ne P`. 
Fixed the order of accuracy of the time -% integration scheme, one is usually interested in understanding the -% behavior of the threshold factor R as a function of the number of -% stages. Therefore, for a fixed element of the array "p", this function -% loops over the elements of the array "s". Thus, min(s) => max(p). The -% equality holds for any order of accuracy because the number of -% linear order conditions that will be imposed to construct the -% GLM coefficients is p. +% * 'skp': find optimal general linear methods (multistep, multistage). +% In this case the vector s must be included in the inputs. +% * 'kp_imp': find optimal implicit linear multistep methods. +% * 'kp_dw': find optimal explicit downwind linear multistep methods. +% * 'kp_imp_dw': find optimal implicit downwind linear multistep methods. % Parse optional input arguments diff --git a/am_radius-opt/radimpfast.m b/am_radius-opt/radimpfast.m index 3039178..da11356 100644 --- a/am_radius-opt/radimpfast.m +++ b/am_radius-opt/radimpfast.m @@ -1,12 +1,14 @@ function rad=radimpfast(p,q) % function rad=radimpfast(p,q) % -% Compute the radius of absolute monotonicity of a rational function. +% Compute the radius of absolute monotonicity of the rational function +% whose numerator has coefficients p and denominator has coefficients q. +% The coefficients are ordered in ascending powers. % % This function is outdated and needs to be fixed. % -% Uses van de Griend's algorithm, assuming multiplicity one for all roots. -% Uses high precision arithmetic. +% Uses van de Griend's algorithm :cite:`vandegriend1986`, assuming multiplicity +% one for all roots. Uses high precision arithmetic. syms x P Q phi diff --git a/doc/RK-coeff-opt.rst b/doc/RK-coeff-opt.rst index dd97c72..e115530 100644 --- a/doc/RK-coeff-opt.rst +++ b/doc/RK-coeff-opt.rst @@ -3,135 +3,184 @@ ============ RK-coeff-opt ============ -This subpackage contains routines for finding optimal Runge-Kutta method coefficents, +This subpackage contains routines for finding optimal Runge-Kutta method coefficients, given a prescribed order of accuracy, number of stages, and an objective function. Constraints on the stability polynomial (possibly obtained using **polyopt** or **am_radius-opt**) can optionally be provided. +To run the tests, execute the MATLAB commands +``` +results_rkopt = runtests('test_rkopt.m'); +table(results_rkopt) +``` + .. contents:: -check_RK_order +oc_butcher =================================== :: - function p = check_RK_order(A,b,c) + function coneq=oc_butcher(A,b,c,p) -Determines order of a RK method, up to sixth order. -For an s-stage method, input `A` should be a `s \times s` matrix; -`b` and `c` should be column vectors of length `s`. +Order conditions for Runge-Kutta methods. +This version is based on Butcher's approach. +Assumes `p>1`. -errcoeff -=============================== + +rk_opt +=================================================== :: - function D = errcoeff(A,b,c,p) + function rk = rk_opt(s,p,class,objective,varargin) -**Inputs**: - - `A`, `b`, `c` -- Butcher tableau - - `p` -- order of accuracy of the method +Find optimal RK and multistep RK methods. +The meaning of the arguments is as follows: -Computes the norm of the vector of truncation error coefficients -for the terms of order `p+1`: -(elementary weight - 1/(density of the tree)/(symmetry of the tree) + * `s` number of stages. + * `k` number of steps (1 for RK methods) + * `p` order of the Runge-Kutta (RK) scheme. + * class: class of method to search. 
Available classes: + + * 'erk' : Explicit Runge-Kutta methods + * 'irk' : Implicit Runge-Kutta methods + * 'dirk' : Diagonally implicit Runge-Kutta methods + * 'sdirk' : Singly diagonally implicit Runge-Kutta methods + * '2S', etc. : Low-storage explicit methods; see *Ketcheson, "Runge-Kutta methods with minimum storage implementations". J. Comput. Phys. 229(5):1763 - 1773, 2010* + * 'emsrk1/2' : Explicit multistep-Runge-Kutta methods + * 'imsrk1/2' : Implicit multistep-Runge-Kutta methods + * 'dimsrk1/2' : Diagonally implicit multistep-Runge-Kutta methods + * objective: objective function ('ssp' = maximize SSP coefficient; 'acc' = minimize leading truncation error coefficient) + Accuracy optimization is not currently supported for multistep RK methods + * poly_coeff_ind: index of the polynomial coefficients to constrain (`\beta_j`) for `j > p` (j denotes the index of the stage). The default value is an empty array. Note that one should not include any indices `i \le p`, since those are determined by the order conditions. + * poly_coeff_val: constrained values of the polynomial coefficients (`\beta_j`) for `j > p` (tall-tree elementary weights). The default value is an empty array. + * startvec: vector of the initial guess ('random' = random approach; 'smart' = smart approach; alternatively, the user can provide the startvec array). By default startvec is initialized with random numbers. + * solveorderconditions: if set to 1, solve the order conditions first before trying to optimize. The default value is 0. + * np: number of processors to use. If np `> 1` the MATLAB global optimization toolbox *Multistart* is used. The default value is 1 (just one core). + * num_starting_points: Number of starting points for the global optimization per processor. The default value is 10. + * writeToFile: whether to write to a file. If set to 1 write the RK coefficients to a file called "ERK-p-s.txt". The default value is 1. + * append_time: whether a timestamp should be added to the output file name + * constrain_emb_stability: a vector of complex points where the embedded method should be stable. Sometimes, fmincon cannot find solutions if emb_poly_coeff_ind,emb_poly_coeff_val are given. In these situations, there are a few parameter combinations where it can be advantageous to ask fmincon to directly constrain the value of the embedded stability function at a few points. In general, the existing approach using polyopt and emb_poly_coeff_ind,emb_poly_coeff_val seems to be better for most problems. + * algorithm: which algorithm to use in fmincon: 'sqp','interior-point', or 'active-set'. By default sqp is used. + * suppress_warnings: whether to suppress all warnings + .. note:: + **numerical experiments have shown that when the objective function is the minimization of the leading truncation error coefficient, the interior-point algorithm performs much better than the sqp one.** + * display: level of display of fmincon solver ('off', 'iter', 'notify' or 'final'). The default value is 'notify'. + * problem_class: class of problems for which the RK is designed ('linear' or 'nonlinear' problems). This option changes the type of order conditions check, i.e. linear or nonlinear order conditions control. The default value is 'nonlinear'. + .. note:: + Only `s` , `p` , class and objective are required inputs. + All the other arguments are **parameter name - value arguments to the input + parser scheme**. 
Therefore they can be specified in any order. + **Example**:: -linear_constraints -=================================================================== + >> rk=rk_opt(4,3,'erk','acc','num_starting_points',2,'np',1,'solveorderconditions',1) + >> rk=rk_opt(4,3,'erk','acc','num_starting_points',2,'np',1,'solveorderconditions',1,'np',feature('numcores')) + +The fmincon options are set through the **optimset** that creates/alters optimization options structure. By default the following additional options are used: + * MaxFunEvals = 1000000 + * TolCon = 1.e-13 + * TolFun = 1.e-13 + * TolX = 1.e-13 + * MaxIter = 10000 + * Diagnostics = off + * DerivativeCheck = off + * GradObj = on, if the objective is set equal to 'ssp' + + + +unpack_lsrk +================================================================================= :: - function [Aeq,beq,lb,ub] = linear_constraints(s,class,objective,k) + function [A,b,bhat,c,alpha,beta,gamma1,gamma2,gamma3,delta]=unpack_lsrk(X,class) -This sets up: +Extracts the coefficient arrays from the optimization vector. - * The linear constraints, corresponding to the consistency conditions - `\sum_j b_j = 1` and `\sum_j a_{ij} = c_j`. - * The upper and lower bounds on the unknowns. These are chosen - somewhat arbitrarily, but usually aren't important as long as - they're not too restrictive. +This function also returns the low-storage coefficients. -nonlinear_constraints -================================================================================================== +check_RK_order +=================================== :: - function [con,coneq]=nonlinear_constraints(x,class,s,p,objective,poly_coeff_ind,poly_coeff_val,k) + function p = check_RK_order(A,b,c) -Impose nonlinear constraints: - - if objective = 'ssp' : both order conditions and absolute monotonicity conditions - - if objective = 'acc' : order conditions -The input arguments are: - * :math:`x`: vector of the decision variables. See unpack_rk.m for details about - the order in which they are stored. - * *class*: class of method to search ('erk' = explicit RK; 'irk' = implicit RK; 'dirk' = diagonally implicit RK; 'sdirk' = singly diagonally implicit RK; '2S', '3S', '2S*', '3S*' = low-storage formulations). - * :math:`s`:number of stages. - * :math:`p`: order of the RK scheme. - * *objective*: objective function ('ssp' = maximize SSP coefficient; 'acc' = minimize leading truncation error coefficient). - * *poly_coeff_ind*: index of the polynomial coefficients (:math:`\beta_j`) for :math:`j > p`. - * *poly_coeff_val*: values of the polynomial coefficients (:math:`\beta_j`) for :math:`j > p` (tall-tree elementary weights). +Determines order of a RK method, up to sixth order. -The outputs are: - * *con*: inequality constraints, i.e. absolute monotonicity conditions if objective = 'ssp' or nothing if objective = 'acc' - * *coneq*: order conditions plus stability function coefficients constraints (tall-tree elementary weights) +For an s-stage method, input `A` should be a `s \times s` matrix; +`b` and `c` should be column vectors of length `s`. -Two forms of the order conditions are implemented: one based on **Butcher's -approach**, and one based on **Albrecht's approach**. One or the other may lead -to a more tractable optimization problem in some cases, but this has not been -explored carefully. The Albrecht order conditions are implemented up to order 9, assuming -a certain stage order, while the Butcher order conditions are implemented up to order 9 but -do not assume anything about the stage order. 
Albrecht's approach is used -by default. +unpack_msrk +============================================================= +:: -oc_albrecht -==================================== + function [A,Ahat,b,bhat,D,theta] = unpack_msrk(X,s,k,class) + + +Extract the coefficient arrays from the optimization vector + + + +errcoeff +=============================== :: - function coneq=oc_albrecht(A,b,c,p) + function D = errcoeff(A,b,c,p) -Order conditions for SSP RK Methods. +**Inputs**: + - `A`, `b`, `c` -- Butcher tableau + - `p` -- order of accuracy of the method -This version is based on Albrecht's approach +Computes the norm of the vector of truncation error coefficients +for the terms of order `p+1`: +(elementary weight - 1/(density of the tree)/(symmetry of the tree) +For now we just use Butcher's approach. We could alternatively use Albrecht's. -oc_butcher -=================================== + + +linear_constraints +=================================================================== :: - function coneq=oc_butcher(A,b,c,p) + function [Aeq,beq,lb,ub] = linear_constraints(s,class,objective,k) -Order conditions for RKMs. -This version is based on Butcher's approach. +This sets up: -Assumes `p>1`. + * The linear constraints, corresponding to the consistency conditions + `\sum_j b_j = 1` and `\sum_j a_{ij} = c_j`. + * The upper and lower bounds on the unknowns. These are chosen + somewhat arbitrarily, but usually aren't important as long as + they're not too restrictive. -oc_ksrk -======================================= +set_n +========================== :: - function coneq= oc_ksrk(A,b,D,theta,p) + function n=set_n(s,class) -Order conditions for multistep-RK methods. +Set total number of decision variables @@ -146,98 +195,68 @@ This is just a small wrapper, used when solveorderconditions=1. -rk_obj -============================================= +oc_ksrk +======================================= :: - function [r,g]=rk_obj(x,class,s,p,objective) + function coneq= oc_ksrk(A,b,D,theta,p) -Objective function for RK optimization. +Order conditions for multistep-RK methods. -The meaning of the input arguments is as follow: - * :math:`x`: vector of the unknowns. - * class: class of method to search ('erk' = explicit RK; 'irk' = implicit RK; 'dirk' = diagonally implicit RK; 'sdirk' = singly diagonally implicit RK; '2S', '3S', '2S*', '3S*' = low-storage formulations). - * :math:`s`:number of stages. - * :math:`p`: order of the RK scheme. - * objective: objective function ('ssp' = maximize SSP coefficient; 'acc' = minimize leading truncation error coefficient). +..warning:: -The meaning of the output arguments is as follow: - * r: it is a scalar containing the radius of absolute monotonicity if objective = 'ssp' or the value of the leading truncation error coefficient if objective = 'acc'. - * g: it is a vector and contains the gradient of the objective function respect to the unknowns. It is an array with all zero elements except for the last component which is equal to one if objective = 'ssp' or it is an empty array if objective = 'acc'. + Here we assume a certain minimum stage order, + which is necessarily true for methods with + strictly positive abscissae (b>0). + This assumption dramatically reduces the + number of order conditions that must be + considered for high-order methods. + For methods that do not satisfy b>0, this + assumption may be unnecessarily restrictive. 
-rk_opt -=================================================== +write_field(writeFid,name,value) +========================================== :: - function rk = rk_opt(s,p,class,objective,varargin) + function write_field(writeFid,name,value) -Find optimal RK and multistep RK methods. -The meaning of the arguments is as follows: - * `s` number of stages. - * `k` number of steps (1 for RK methods) - * `p` order of the Runge-Kutta (RK) scheme. - * class: class of method to search. Available classes: +Utility function to write a single parameter and value. - * 'erk' : Explicit Runge-Kutta methods - * 'irk' : Implicit Runge-Kutta methods - * 'dirk' : Diagonally implicit Runge-Kutta methods - * 'sdirk' : Singly diagonally implicit Runge-Kutta methods - * '2S', etc. : Low-storage explicit methods; see *Ketcheson, "Runge-Kutta methods with minimum storage implementations". J. Comput. Phys. 229(5):1763 - 1773, 2010*) - * 'emsrk1/2' : Explicit multistep-Runge-Kutta methods - * 'imsrk1/2' : Implicit multistep-Runge-Kutta methods - * 'dimsrk1/2' : Diagonally implicit multistep-Runge-Kutta methods - * objective: objective function ('ssp' = maximize SSP coefficient; 'acc' = minimize leading truncation error coefficient) - Accuracy optimization is not currently supported for multistep RK methods - * poly_coeff_ind: index of the polynomial coefficients to constrain (`\beta_j`) for `j > p` (j denotes the index of the stage). The default value is an empty array. Note that one should not include any indices `i \le p`, since those are determined by the order conditions. - * poly_coeff_val: constrained values of the polynomial coefficients (`\beta_j`) for `j > p` (tall-tree elementary weights). The default value is an empty array. - * startvec: vector of the initial guess ('random' = random approach; 'smart' = smart approach; alternatively, the user can provide the startvec array. By default startvec is initialize with random numbers. - * solveorderconditions: if set to 1, solve the order conditions first before trying to optimize. The default value is 0. - * np: number of processor to use. If np `> 1` the MATLAB global optimization toolbox *Multistart* is used. The default value is 1 (just one core). - * max_tries: maximum number of fmincon function calls. The default value is 10. - * writeToFile: whether to write to a file. If set to 1 write the RK coefficients to a file called "ERK-p-s.txt". The default value is 1. - * algorithm: which algorithm to use in fmincon: 'sqp','interior-point', or 'active-set'. By default sqp is used. - .. note:: - **numerical experiments have shown that when the objective function is the minimization of the leading truncation error coefficient, the interior-point algorithm performs much better than the sqp one.** +oc_albrecht +==================================== +:: - * display: level of display of fmincon solver ('off', 'iter', 'notify' or 'final'). The default value is 'notify'. - * problem_class: class of problems for which the RK is designed ('linear' or 'nonlinear' problems). This option changes the type of order conditions check, i.e. linear or nonlinear order conditions controll. The default value is 'nonlinear'. + function coneq=oc_albrecht(A,b,c,p) - .. note:: +Order conditions for SSP RK methods. - Only `s` , `p` , class and objective are required inputs. - All the other arguments are **parameter name - value arguments to the input - parser scheme**. Therefore they can be specified in any order. +This version is based on Albrecht's approach. 
- **Example**:: - >> rk=rk_opt(4,3,'erk','acc','max_tries',2,'np',1,'solveorderconditions',1) -The fmincon options are set through the **optimset** that creates/alters optimization options structure. By default the following additional options are used: - * MaxFunEvals = 1000000 - * TolCon = 1.e-13 - * TolFun = 1.e-13 - * TolX = 1.e-13 - * MaxIter = 10000 - * Diagnostics = off - * DerivativeCheck = off - * GradObj = on, if the objective is set equal to 'ssp' +unpack_rk +==================================================== +:: + unction [A,b,c,Ahat,bhat,chat]=unpack_rk(X,s,class) -set_n -========================== -:: +Extracts the coefficient arrays from the optimization vector. - function n=set_n(s,class) +The coefficients are stored in a single vector x as:: -Set total number of decision variables + x=[A b' c'] + +A is stored row-by-row. + +Low-storage methods are stored in other ways as detailed inline below. @@ -251,57 +270,68 @@ shuosher2butcher Generate Butcher form of a Runge-Kutta method, given its Shu-Osher or modified Shu-Osher form. -For an m-stage method, `\alpha` and `\beta` should be +For an m-stage method, `\alpha` and `\beta` should be matrices of dimension `(m+1) \times m`. -unpack_lsrk -=================================================================================== +nonlinear_constraints +================================================================================================================================================================ :: - function [A,b,bhat,c,alpha,beta,gamma1,gamma2,gamma3,delta]=unpack_lsrk(X,class) + function [con,coneq]=nonlinear_constraints(x,class,s,p,objective,poly_coeff_ind,poly_coeff_val,k,emb_poly_coeff_ind,emb_poly_coeff_val,constrain_emb_stability) +Impose nonlinear constraints: + - if objective = 'ssp' : both order conditions and absolute monotonicity conditions + - if objective = 'acc' : order conditions +The input arguments are: + * :math:`x`: vector of the decision variables. See unpack_rk.m for details about + the order in which they are stored. + * *class*: class of method to search ('erk' = explicit RK; 'irk' = implicit RK; 'dirk' = diagonally implicit RK; 'sdirk' = singly diagonally implicit RK; '2S', '3S', '2S*', '3S*' = low-storage formulations). + * :math:`s`:number of stages. + * :math:`p`: order of the RK scheme. + * *objective*: objective function ('ssp' = maximize SSP coefficient; 'acc' = minimize leading truncation error coefficient). + * *poly_coeff_ind*: index of the polynomial coefficients (:math:`\beta_j`) for :math:`j > p`. + * *poly_coeff_val*: values of the polynomial coefficients (:math:`\beta_j`) for :math:`j > p` (tall-tree elementary weights). + * :math:`k`: Number of steps for multi-step, mlti-stage schemes. + * *emb_poly_coeff_ind*: index of the polynomial coefficients of the embedded scheme (:math:`\beta_j`) for :math:`j > p`. + * *emb_poly_coeff_val*: values of the polynomial coefficients of the embedded scheme (:math:`\beta_j`) for :math:`j > p` (tall-tree elementary weights). -Extracts the coefficient arrays from the optimization vector. - -This function also returns the low-storage coefficients. - - - -unpack_msrk -============================================================= -:: - - function [A,Ahat,b,bhat,D,theta] = unpack_msrk(X,s,k,class) - +The outputs are: + * *con*: inequality constraints, i.e. 
absolute monotonicity conditions if objective = 'ssp' or nothing if objective = 'acc' + * *coneq*: order conditions plus stability function coefficients constraints (tall-tree elementary weights) -Extract the coefficient arrays from the optimization vector +Two forms of the order conditions are implemented: one based on **Butcher's +approach**, and one based on **Albrecht's approach**. One or the other may lead +to a more tractable optimization problem in some cases, but this has not been +explored carefully. The Albrecht order conditions are implemented up to order 9, assuming +a certain stage order, while the Butcher order conditions are implemented up to order 9 but +do not assume anything about the stage order. Albrecht's approach is used +by default. -unpack_rk -====================================== +rk_obj +============================================= :: - function [A,b,c]=unpack_rk(X,s,class) - - -Extracts the coefficient arrays from the optimization vector. - -The coefficients are tored in a single vector x as:: + function [r,g]=rk_obj(x,class,s,p,objective) - x=[A b' c'] -A is stored row-by-row. +Objective function for RK optimization. +The meaning of the input arguments is as follows: + * :math:`x`: vector of the unknowns. + * class: class of method to search ('erk' = explicit RK; 'irk' = implicit RK; 'dirk' = diagonally implicit RK; 'sdirk' = singly diagonally implicit RK; '2S', '3S', '2S*', '3S*' = low-storage formulations). + * :math:`s`:number of stages. + * :math:`p`: order of the RK scheme. + * objective: objective function ('ssp' = maximize SSP coefficient; 'acc' = minimize leading truncation error coefficient). +The meaning of the output arguments is as follows: + * r: it is a scalar containing the radius of absolute monotonicity if objective = 'ssp' or the value of the leading truncation error coefficient if objective = 'acc'. + * g: a vector containing the gradient of the objective function respect to the unknowns. It is an array with all zero elements except for the last component which is equal to one if objective = 'ssp' or it is an empty array if objective = 'acc'. -writeField -============================================ -:: - function wf=writeField(writeFid,name,value) diff --git a/doc/RKtools.rst b/doc/RKtools.rst index c220f8e..df963eb 100644 --- a/doc/RKtools.rst +++ b/doc/RKtools.rst @@ -9,37 +9,6 @@ Some general utilities for analyzing Runge-Kutta methods. .. contents:: -am_radius -======================================= -:: - - function r = am_radius(A,b,c,eps,rmax) - - -Evaluates the Radius of absolute monotonicity -of a Runge-Kutta method, given the Butcher array. - -For an `m`-stage method, `A` should be an `m \times m` matrix -and `b` should be a column vector of length m. - -Accuracy can be changed by modifying the value of eps (default `10^-{10}`) -Methods with very large radii of a.m. (>50) will require -the default value of rmax to be increased. - -The radius of absolute monotonicity is the largest value of `r` -such that - -.. raw:: latex - - \begin{eqnarray} - K(I+rA)^{-1} \ge & 0 \\ - rK(I+rA)^{-1}e_m \le & e_{m+1} - \end{eqnarray} - - where $$ K = \left(\begin{array}{c} A \\ b^T \end{array}\right) $$ - - - internal_stab_explicit_butcher ==================================================================================== :: @@ -66,21 +35,37 @@ Runge-Kuatta method. 
-L2_timestep_poly -================================================= +plotstabreg_func +=============================================================== :: - function c = L2_timestep_poly(sdisc,p,q,eps,tol) + function [contour_matrix] = plotstabreg_func(p,q,bounds,ls,lw) -Find the absolutely timestep for a given combination of -linear spatial discretization and stability function. +plot the absolute stability region of a one-step method, +given the stability function -Also (optionally) plots the region of absolute stability and the eigenvalues. +Inputs: + * p: coefficients of the numerator of the stability function + * q: coefficients of the denominator of the stability function -The timestep is determined to within accuracy eps (default 10^-4). + if q is omitted, it is assumed that the function is a polynomial +Remaining inputs are optional: + * bounds: bounds for region to compute and plot (default [-9 1 -5 5]) + * ls: line style (default '-r') + * lw: line width (default 2) -The spectral stability condition is checked to within tol (default 10^-13). + + +plotstabreg +============================================================= +:: + + function [contour_matrix] = plotstabreg(rk,plotbounds,ls,lw) + + +Plots the absolute stability region +of a Runge-Kutta method, given the Butcher array @@ -93,37 +78,40 @@ optimal_shuosher_form -plotstabreg(rk,plotbounds,ls,lw) -========================================== +L2_timestep_poly +================================================= :: - function plotstabreg(rk,plotbounds,ls,lw) + function c = L2_timestep_poly(sdisc,p,q,eps,tol) -Plots the absolute stability region -of a Runge-Kutta method, given the Butcher array +Find the absolutely timestep for a given combination of +linear spatial discretization and stability function. +Also (optionally) plots the region of absolute stability and the eigenvalues. +The timestep is determined to within accuracy eps (default 10^-4). -plotstabreg_func +The spectral stability condition is checked to within tol (default 10^-13). + + + +semispectrum ====================================================== :: - function [dummy] = plotstabreg_func(p,q,bounds,ls,lw) - + function L = semispectrum(method,order,doplot,nx,cfl) -plot the absolute stability region of a one-step method, -given the stability function +Plot spectra of various semi-discretizations of the advection equation -Inputs: - * p: coefficients of the numerator of the stability function - * q: coefficients of the denominator of the stability function +Current choices for method: + - 'fourier': Fourier spectral method + - 'chebyshev': Chebyshev spectral method + - 'updiff': Upwind difference operators (linearized WENO) + - 'DG': Discontinuous Galerkin method - if q is omitted, it is assumed that the function is a polynomial -Remaining inputs are optional: - * bounds: bounds for region to compute and plot (default [-9 1 -5 5]) - * ls: line style (default '-r') - * lw: line width (default 2) +The value of order matters only for the 'updiff' and 'DG' methods +and selects the order of accuracy in those cases. 
@@ -148,22 +136,32 @@ q contains the coefficients of the denominator -semispectrum -====================================================== +am_radius +======================================= :: - function L = semispectrum(method,order,doplot,nx,cfl) + function r = am_radius(A,b,c,eps,rmax) -Plot spectra of various semi-discretizations of the advection equation -Current choices for method: - - 'fourier': Fourier spectral method - - 'chebyshev': Chebyshev spectral method - - 'updiff': Upwind difference operators (linearized WENO) - - 'DG': Discontinuous Galerkin method +Evaluates the Radius of absolute monotonicity +of a Runge-Kutta method, given the Butcher array. -The value of order matters only for the 'updiff' and 'DG' methods -and selects the order of accuracy in those cases. +For an `m`-stage method, `A` should be an `m \times m` matrix +and `b` and `c` should be column vectors of length m. + +Accuracy can be changed by modifying the value of eps (default `10^{-10}`) +Methods with very large radii of a.m. (>50) will require +the default value of rmax to be increased. + +The radius of absolute monotonicity is the largest value of `r` +such that + + \\begin{align*} + K(I+rA)^{-1} \\ge & 0 \\\\\\ + rK(I+rA)^{-1}e_m \\le & e_{m+1} + \\end{align*} + + where $$ K = \\left(\\begin{array}{c} A \\\\\\ b^T \\end{array}\\right) $$ diff --git a/doc/am_radius-opt.rst b/doc/am_radius-opt.rst index 3e89ebe..bae4cc7 100644 --- a/doc/am_radius-opt.rst +++ b/doc/am_radius-opt.rst @@ -16,9 +16,6 @@ The optimization of rational functions is experimental. .. contents:: - - - multi_R_opt =================================================== :: @@ -29,27 +26,41 @@ multi_R_opt This function is a script to run the routines Rskp, Rkp_dw, Rkp_imp, or Rkp_imp_dw several times with different inputs, in order to construct tables of optimal values like those that appear in :cite:`2009_monotonicity`. -different values of the input parameters, i.e.: -k = [k1, k2, ..., kK]^T, K = length(k), ith-element = # of steps -p = [p1, p2, ..., pP]^T, P = length(p), ith-element = order of accuracy +The inputs k, p, and (optionally) s should be vectors containing +the numbers of steps, orders of accuracy, and numbers of stages +to be considered, respectively. The output includes results for +all combinations of values from the input vectors. + +The family of methods to be considered is specified in the string 'class'. +Valid values are: + + * 'skp': find optimal general linear methods (multistep, multistage). + In this case the vector s must be included in the inputs. + * 'kp_imp': find optimal implicit linear multistep methods. + * 'kp_dw': find optimal explicit downwind linear multistep methods. + * 'kp_imp_dw': find optimal implicit downwind linear multistep methods. + + + +Rkp +================================= +:: -and + function [R,alpha,beta]=Rkp(k,p) -s = [s1, s2, ..., sS]^T, S = length(s), ith-element = # of stages -when optimal contractive k-step, s-stage GLM are investigated. +Find the optimal SSP k-step explicit LMM with order of accuracy p. -The family of method to be considered is specified in the string 'class'. +Inputs: + * k = # of steps + * p = order of accuracy -Note that in general `S\ne K\ne P`. Fixed the order of accuracy of the time - integration scheme, one is usually interested in understanding the - behavior of the threshold factor R as a function of the number of - stages. Therefore, for a fixed element of the array "p", this function - loops over the elements of the array "s". Thus, min(s) => max(p). 
The - equality holds for any order of accuracy because the number of - linear order conditions that will be imposed to construct the - GLM coefficients is p. +Outputs: + * R = the SSP coefficient + * alpha, beta = the coefficients of the method + +Requires MATLAB's optimization toolbox for the LP solver. @@ -60,32 +71,41 @@ radimpfast function rad=radimpfast(p,q) -Compute the radius of absolute monotonicity of a rational function. +Compute the radius of absolute monotonicity of the rational function +whose numerator has coefficients p and denominator has coefficients q. +The coefficients are ordered in ascending powers. This function is outdated and needs to be fixed. -Uses van de Griend's algorithm, assuming multiplicity one for all roots. -Uses high precision arithmetic. +Uses van de Griend's algorithm :cite:`vandegriend1986`, assuming multiplicity +one for all roots. Uses high precision arithmetic. -Rkp -================================= +Rskp +=============================== :: - function [R,alpha,beta]=Rkp(k,p) + function [R,gamma]=Rskp(s,k,p) -Find the optimal SSP k-step explicit LMM with order of accuracy p. +Finds the optimal contractive k-step, s-stage GLM with order of accuracy p +for linear problems -Inputs: - * `k` = # of steps - * `p` = order of accuracy +Inputs: s = # of stages + k = # of steps + p = order of accuracy Outputs: - * `\alpha, \beta` = the coefficients of the method + R = threshold factor + gamma = coefficients of the polynomials + + for k=1, the resulting polynomial is + `\sum_{j=0}^m (1+z/R)^j` -Requires MATLAB's optimization toolbox for the LP solver. + For details on the general case, see :cite:`2009_monotonicity`. + +This routine requires MATLAB's optimization toolbox for the LP solver. @@ -99,13 +119,16 @@ Rkp_dw Finds the optimal SSP k-step explicit LMM with order of accuracy p allowing downwind operators -Inputs: k = # of steps - p = order of accuracy +Inputs: + * k = # of steps + * p = order of accuracy -Outputs: alpha, beta, tbeta = the coefficients of the method +Outputs: + * R = the SSP coefficient + * alpha, beta, tbeta = the coefficients of the method The method is given by -`u_n = \sum_{j=0}^{k-1} (\alpha[j] + \beta[j] F(u_{n-k+j} + tbeta[j] tF(u_{n-k+j}))` +`u_n = \sum_{j=0}^{k-1} (\alpha[j] + \beta[j] F(u_{n-k+j}) + tbeta[j] tF(u_{n-k+j}))` where tF(u) is the negated downwind operator. 
Depends on MATLAB's optimization toolbox for the LP solver @@ -125,7 +148,9 @@ Inputs: * k = # of steps * p = order of accuracy -Outputs: alpha, beta = the coefficients of the method +Outputs: + * R = the SSP coefficient + * alpha, beta = the coefficients of the method Depends on MATLAB's optimization toolbox for the LP solver @@ -141,40 +166,18 @@ Rkp_imp_dw Finds the optimal k-step implicit LMM with order of accuracy p allowing downwinding -Inputs: k = # of steps - p = order of accuracy +Inputs: + * k = # of steps + * p = order of accuracy -Outputs: alpha, beta, tbeta = the coefficients of the method +Outputs: + * R = the SSP coefficient + * alpha, beta, tbeta = the coefficients of the method Depends on MATLAB's optimization toolbox for the LP solver -Rskp -=============================== -:: - - function [R,gamma]=Rskp(s,k,p) - - -Finds the optimal contractive k-step, s-stage GLM with order of accuracy p -for linear problems - -Inputs: s = # of stages - k = # of steps - p = order of accuracy - -Outputs: - R = threshold factor - gamma = coefficients of the polynomials - - for k=1, the resulting polynomial is - `\sum_{j=0}^m (1+z/R)^j` - - in general, the resulting stability function is - (Fill in) - -epends on MATLAB's optimization toolbox for the LP solver diff --git a/doc/conf.py b/doc/conf.py index 94fc7eb..ef49afc 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -233,4 +233,4 @@ #intersphinx_mapping = {'http://docs.python.org/': None} #jsmath_path = 'http://numerics.kaust.edu.sa/jsMath/easy/load.js' jsmath_path = '/Users/ketch/Software/jsMath/easy/load.js' -keep_warnings = 'True' +keep_warnings = True diff --git a/doc/m2rst.py b/doc/m2rst.py index 60c459e..1d9a319 100644 --- a/doc/m2rst.py +++ b/doc/m2rst.py @@ -26,7 +26,7 @@ def extract_matlab_docstring(mfile): return docstring+'\n' return docstring+'\n' -def compile_docstrings(directory,rstfile): +def compile_docstrings(directory,rstfile,skip_tests=True): """Write all the docstrings from directory to rstfile.""" import os output=open(rstfile,'w') @@ -46,7 +46,7 @@ def compile_docstrings(directory,rstfile): output.write('\n.. contents::\n\n') for fname in os.listdir(directory): - if fname[-2:]=='.m': + if fname[-2:]=='.m' and not ('test' in fname.lower()): docstring = extract_matlab_docstring(os.path.relpath(directory+fname)) output.write(docstring) output.write('\n\n') diff --git a/doc/polyopt.rst b/doc/polyopt.rst index fbb6f24..8faface 100644 --- a/doc/polyopt.rst +++ b/doc/polyopt.rst @@ -5,23 +5,48 @@ polyopt ======= Given a spectrum (typically corresponding to a spatial semi-discretization of a PDE), finds an optimal stability polynomial. The -polynomial coefficients can then be used as input to RK-coeff-opt to find a +polynomial coefficients can then be used as input to `RK-coeff-opt` to find a corresponding Runge-Kutta method. This is the implementation of the algorithm described in :cite:`2012_optimal_stability_polynomials`. The code was written by Aron Ahmadia and David Ketcheson. +To run the tests, execute the MATLAB commands +``` +results_polyopt = runtests('test_polyopt.m'); +table(results_polyopt) +``` + .. contents:: +spectrum +============================================= +:: + + function lamda = spectrum(name,N,kappa,beta) + + +Return N discretely sampled values from certain sets in the complex plane. 
+ +Acceptable values for name: + * 'realaxis': `[-1,0]` + * 'imagaxis': `[-i,i]` + * 'disk': `{z : |z+1|=1}` + * 'rectangle': `{x+iy : -\beta \le y \le \beta, -\kappa \le x \le 0}` + * 'Niegemann-ellipse' and 'Niegemann-circle': See :cite:`niegemann2011` + * 'gap': Spectrum with a gap; see :cite:`2012_optimal_stability_polynomials` + +kappa and beta are used only if name == 'rectangle' + opt_poly_bisect -================================================================== +============================================================================== :: - function [h,poly_coeff] = opt_poly_bisect(lam,s,p,basis,varargin) + function [h,poly_coeff,diag_bisect] = opt_poly_bisect(lam,s,p,basis,varargin) Finds an optimally stable polynomial of degree s and order p for the spectrum @@ -29,21 +54,28 @@ lam in the interval (h_min,h_max) to precision eps. Optional arguments: - lam_func: + + solvers: + A cell array of cvx solver names that should be used to + solve the convex problem in the inner loop. Defaults to + {'sdpt3', 'sedumi'}. You can also add 'mosek' and + 'gurobi' if you have obtained (free academic) licences + for these and installed them in cvx. + lam_func: A function used to generate the appropriate spectrum at each bisection step, instead of using a fixed (scaled) spectrum. Used for instance to find the longest rectangle of a fixed height - (see Figure 10 of :cite:`2012_optimal_stability_polynomials`). + (see Figure 10 of the CAMCoS paper). Examples: - To find negative real axis inclusion:: - lam = spectrum('realaxis',500); + lam = spectrum('realaxis',500); s = 10; p = 2; - [h,poly_coeff] = opt_poly_bisect(lam,s,p,'chebyshev') + [h,poly_coeff] = opt_poly_bisect(lam,s,p,'chebyshev') - - To reproduce figure 10 of :cite:`2012_optimal_stability_polynomials`):: + - To reproduce figure 10 of :cite:`2012_optimal_stability_polynomials` :: lam_func = @(kappa) spectrum('rectangle',100,kappa,10) [h,poly_coeff] = opt_poly_bisect(lam,20,1,'chebyshev','lam_func',lam_func) @@ -51,21 +83,3 @@ Examples: -spectrum -============================================= -:: - - function lamda = spectrum(name,N,kappa,beta) - - -Return N discretely sampled values from certain sets in the complex plane. - -Acceptable values for name: - * 'realaxis': `[-1,0]` - * 'imagaxis': `[-i,i]` - * 'disk': `{z : |z+1|=1}` - * 'rectangle': `{x+iy : -\beta \le y \le \beta, -\kappa \le x \le 0}` - * 'Niegemann-ellipse' and 'Niegemann-circle': See Niegemann 2011 - * 'gap': Spectrum with a gap; see Ketcheson & Ahmadia 2012 - -kappa and beta are used only if name == 'rectangle' diff --git a/doc/refs.bib b/doc/refs.bib index 91c4c72..903945c 100644 --- a/doc/refs.bib +++ b/doc/refs.bib @@ -1,12 +1,10 @@ -@article{2012_optimal_stability_polynomials, - Author = {Ketcheson, David I. and Ahmadia, Aron J.}, - Journal = {Communications in Applied Mathematics and Computational Science}, - Number = {2}, - Pages = {247--271}, - Title = {Optimal stability polynomials for numerical integration of initial value problems}, - Volume = {7}, - Year = {2012} -} +@article{vandegriend1986, + Author = {van de Griend, J A and Kraaijevanger, J. F. B. 
M.}, + Journal = {Numerische Mathematik}, + Pages = {413--424}, + Title = {{Absolute Monotonicity of Rational Functions Occurring in the Numerical Solution of Initial Value Problems}}, + Volume = {49}, + Year = {1986}} @article{2009_monotonicity, Author = {Ketcheson, David I.}, @@ -19,3 +17,24 @@ @article{2009_monotonicity Year = {2009}} +@article{niegemann2011, + Author = {Niegemann, Jens and Diehl, Richard and Busch, Kurt}, + Journal = {Journal of Computational Physics}, + Month = sep, + Number = {2}, + Pages = {364--372}, + Title = {{Efficient low-storage Runge--Kutta schemes with optimized stability regions}}, + Volume = {231}, + Year = {2011}} + +@article{2012_optimal_stability_polynomials, + Author = {Ketcheson, David I. and Ahmadia, Aron J.}, + Journal = {Communications in Applied Mathematics and Computational Science}, + Number = {2}, + Pages = {247--271}, + Title = {Optimal stability polynomials for numerical integration of initial value problems}, + Volume = {7}, + Year = {2012} +} + + diff --git a/dwrk-opt/README.md b/dwrk-opt/README.md new file mode 100644 index 0000000..a323fce --- /dev/null +++ b/dwrk-opt/README.md @@ -0,0 +1,3 @@ +This directory contains experimental research code for +investigating downwind-biased Runge-Kutta methods. +Use at your own risk! diff --git a/low-storage/README.md b/low-storage/README.md new file mode 100644 index 0000000..ee1b376 --- /dev/null +++ b/low-storage/README.md @@ -0,0 +1,3 @@ +This directory contains experimental research code related to +finding optimal low-storage Runge-Kutta methods. Use at your +own risk! diff --git a/polyopt/README.rst b/polyopt/README.rst index 0e22dda..0a03852 100644 --- a/polyopt/README.rst +++ b/polyopt/README.rst @@ -3,7 +3,7 @@ semi-discretization of a PDE), finds an optimal stability polynomial. The polynomial coefficients can then be used as input to `RK-coeff-opt` to find a corresponding Runge-Kutta method. -This is the implementation of the algorithm described in [ketcheson-ahmadia]_. +This is the implementation of the algorithm described in :cite:`2012_optimal_stability_polynomials`. The code was written by Aron Ahmadia and David Ketcheson. To run the tests, execute the MATLAB commands diff --git a/polyopt/opt_poly_bisect.m b/polyopt/opt_poly_bisect.m index 6850eca..4c162d2 100644 --- a/polyopt/opt_poly_bisect.m +++ b/polyopt/opt_poly_bisect.m @@ -27,7 +27,7 @@ % s = 10; p = 2; % [h,poly_coeff] = opt_poly_bisect(lam,s,p,'chebyshev') % -% - To reproduce figure 10 of [ketcheson-ahmadia]_ :: +% - To reproduce figure 10 of :cite:`2012_optimal_stability_polynomials` :: % % lam_func = @(kappa) spectrum('rectangle',100,kappa,10) % [h,poly_coeff] = opt_poly_bisect(lam,20,1,'chebyshev','lam_func',lam_func) @@ -359,4 +359,4 @@ minimize max(R) plot(real(lam),imag(lam),'o') hold off; status = 1; -end \ No newline at end of file +end diff --git a/polyopt/spectrum.m b/polyopt/spectrum.m index 6396a6c..13e9e16 100644 --- a/polyopt/spectrum.m +++ b/polyopt/spectrum.m @@ -8,8 +8,8 @@ % * 'imagaxis': `[-i,i]` % * 'disk': `{z : |z+1|=1}` % * 'rectangle': `{x+iy : -\beta \le y \le \beta, -\kappa \le x \le 0}` -% * 'Niegemann-ellipse' and 'Niegemann-circle': See Niegemann 2011 -% * 'gap': Spectrum with a gap; see Ketcheson & Ahmadia 2012 +% * 'Niegemann-ellipse' and 'Niegemann-circle': See :cite:`niegemann2011` +% * 'gap': Spectrum with a gap; see :cite:`2012_optimal_stability_polynomials` % % kappa and beta are used only if name == 'rectangle'