
SAS/IML 9.3 User's Guide

SAS Documentation
The correct bibliographic citation for this manual is as follows: SAS Institute Inc. 2011. SAS/IML 9.3 User's Guide. Cary, NC:
SAS Institute Inc.

SAS/IML 9.3 User's Guide

Copyright © 2011, SAS Institute Inc., Cary, NC, USA

ISBN 978-1-60764-913-7

All rights reserved. Produced in the United States of America.

For a hard-copy book: No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or
by any means, electronic, mechanical, photocopying, or otherwise, without the prior written permission of the publisher, SAS
Institute Inc.

For a Web download or e-book: Your use of this publication shall be governed by the terms established by the vendor at the time
you acquire this publication.

The scanning, uploading, and distribution of this book via the Internet or any other means without the permission of the publisher
is illegal and punishable by law. Please purchase only authorized electronic editions and do not participate in or encourage
electronic piracy of copyrighted materials. Your support of others' rights is appreciated.

U.S. Government Restricted Rights Notice: Use, duplication, or disclosure of this software and related documentation by the
U.S. government is subject to the Agreement with SAS Institute and the restrictions set forth in FAR 52.227-19, Commercial
Computer Software-Restricted Rights (June 1987).

SAS Institute Inc., SAS Campus Drive, Cary, North Carolina 27513.

1st electronic book, July 2011


1st printing, July 2011
SAS Publishing provides a complete selection of books and electronic products to help customers use SAS software to its fullest
potential. For more information about our e-books, e-learning products, CDs, and hard-copy books, visit the SAS Publishing Web
site at support.sas.com/publishing or call 1-800-727-3228.

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in
the USA and other countries. ® indicates USA registration.

Other brand and product names are registered trademarks or trademarks of their respective companies.
Contents
Chapter 1. What's New in SAS/IML 9.3 . . . . . . . . . . . . . . . . . . . . . 1
Chapter 2. Introduction to SAS/IML Software . . . . . . . . . . . . . . . . . . . 9
Chapter 3. Understanding the SAS/IML Language . . . . . . . . . . . . . . . . . 15
Chapter 4. Tutorial: A Module for Linear Regression . . . . . . . . . . . . . . . . 29
Chapter 5. Working with Matrices . . . . . . . . . . . . . . . . . . . . . . . 41
Chapter 6. Programming Statements . . . . . . . . . . . . . . . . . . . . . . 65
Chapter 7. Working with SAS Data Sets . . . . . . . . . . . . . . . . . . . . . 85
Chapter 8. File Access . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Chapter 9. General Statistics Examples . . . . . . . . . . . . . . . . . . . . . 125
Chapter 10. Submitting SAS Statements . . . . . . . . . . . . . . . . . . . . . 179
Chapter 11. Calling Functions in the R Language . . . . . . . . . . . . . . . . . . 189
Chapter 12. Robust Regression Examples . . . . . . . . . . . . . . . . . . . . . 205
Chapter 13. Time Series Analysis and Examples . . . . . . . . . . . . . . . . . . 249
Chapter 14. Nonlinear Optimization Examples . . . . . . . . . . . . . . . . . . . 335
Chapter 15. Graphics Examples . . . . . . . . . . . . . . . . . . . . . . . . 411
Chapter 16. Window and Display Features . . . . . . . . . . . . . . . . . . . . 441
Chapter 17. Storage Features . . . . . . . . . . . . . . . . . . . . . . . . . 455
Chapter 18. Using SAS/IML Software to Generate SAS/IML Statements . . . . . . . . . 461
Chapter 19. Wavelet Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 477
Chapter 20. Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 499
Chapter 21. Sparse Matrix Algorithms . . . . . . . . . . . . . . . . . . . . . . 529
Chapter 22. Further Notes . . . . . . . . . . . . . . . . . . . . . . . . . . 537
Chapter 23. Language Reference . . . . . . . . . . . . . . . . . . . . . . . . 543
Chapter 24. Module Library . . . . . . . . . . . . . . . . . . . . . . . . . . 1065

Subject Index 1083

Syntax Index 1093


Chapter 1

What's New in SAS/IML 9.3

Contents
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Calling SAS Procedures from PROC IML . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Calling R Functions from PROC IML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
New Functions and Subroutines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
ALLCOMB Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
ALLPERM Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
BIN Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
CORR Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
COV Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
COUNTN Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
COUNTMISS Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
COUNTUNIQUE Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
CUPROD Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
DIF Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
ELEMENT Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
FULL Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
LAG Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
MEAN Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
PROD Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
QNTL Call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
RANCOMB Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
RANGE Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
RANPERM Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
SHAPECOL Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
SQRVECH Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
STD Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
SPARSE Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
TABULATE Call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
VAR Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
VECH Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Changes to the IMLMLIB Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Documentation Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Overview

SAS/IML 9.3 includes two new features that are related to calling other languages from within the IML
procedure:

• calling SAS procedures and DATA steps from PROC IML

• calling functions in the R statistical programming language from PROC IML

In addition, SAS/IML 9.3 provides several new functions and subroutines.

Calling SAS Procedures from PROC IML

SAS/IML 9.3 supports the SUBMIT and ENDSUBMIT statements. These statements delimit a block of
statements that are sent to another language for processing.
The SUBMIT and ENDSUBMIT statements enable you to call SAS procedures and DATA steps without
leaving the IML procedure. This feature has been very popular in SAS/IML Studio since it was introduced
in 2002. The feature is now available in PROC IML.
You can use SAS data sets to transfer data between SAS/IML matrices and SAS procedures. SAS procedures
require that data be in a SAS data set.
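
For example, the following statements show the idea as a minimal sketch (the data set name MyData is
arbitrary): a SAS/IML matrix is written to a data set, and the MEANS procedure is then called on that data
set without leaving PROC IML:

proc iml;
x = {1, 2, 3, 4, 5};
create MyData var {"x"}; /* write the matrix to a SAS data set */
append;
close MyData;

submit; /* the enclosed statements are processed by SAS */
   proc means data=MyData;
      var x;
   run;
endsubmit;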

Calling R Functions from PROC IML

The SUBMIT and ENDSUBMIT statements also provide an interface to the R statistical programming
language, so that you can submit R statements from within your SAS/IML program. To submit statements
to R, specify the R option in the SUBMIT statement.
You can transfer data from SAS/IML matrices and SAS data sets into R matrices and R data frames, and
vice versa. Specifically, the following subroutines are available to transfer data from a SAS format into an
R format:

Table 1.1 Transferring from a SAS Source to an R Destination


Subroutine SAS Source R Destination
ExportDataSetToR SAS data set R data frame
ExportMatrixToR SAS/IML matrix R matrix

In addition, the following subroutines are available to transfer data from an R format into a SAS format:
New Functions and Subroutines F 3

Table 1.2 Transferring from an R Source to a SAS Destination


Subroutine R Source SAS Destination
ImportDataSetFromR R expression SAS data set
ImportMatrixFromR R expression SAS/IML matrix

In Table 1.2, an R expression can be the name of a data frame, the name of a matrix, or an expression that
results in either of these data structures.
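
As a minimal sketch (this assumes a properly configured R installation and that SAS was started with the
RLANG system option), the following statements copy a SAS data set to an R data frame, call an R function
on it, and copy a data frame back into a SAS data set:

proc iml;
call ExportDataSetToR("Sashelp.Class", "df"); /* SAS data set -> R data frame */
submit / R;
   print(summary(df))
endsubmit;
call ImportDataSetFromR("Work.Class2", "df"); /* R data frame -> SAS data set */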

New Functions and Subroutines

ALLCOMB Function

The ALLCOMB function generates all combinations of n elements taken k at a time.

ALLPERM Function

The ALLPERM function generates all permutations of n elements.

BIN Function

The BIN function divides numeric values into a set of disjoint intervals called bins. The BIN function
indicates which elements are contained in each bin.

CORR Function

The CORR function computes a sample correlation matrix for data. The function supports Pearson's
product-moment correlations, Hoeffding's D statistics, Kendall's tau-b coefficients, and Spearman's
correlation coefficients based on the ranks of the variables. The function supports two different methods for
dealing with missing values in the data.
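
For example, the following statements (a small sketch with arbitrary data) compute both the default
Pearson correlations and Spearman rank correlations for a two-column data matrix:

proc iml;
x = {1 2, 2 4, 3 5, 4 8, 5 9}; /* 5 observations, 2 variables */
p = corr(x);                   /* Pearson correlations (the default) */
s = corr(x, "Spearman");       /* Spearman rank correlations */
print p, s;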

COV Function

The COV function computes a sample variance-covariance matrix for data. The function supports two
different methods for dealing with missing values in the data.

COUNTN Function

The COUNTN function counts the number of nonmissing values in a matrix.

COUNTMISS Function

The COUNTMISS function counts the number of missing values in a matrix.

COUNTUNIQUE Function

The COUNTUNIQUE function counts the number of unique values in a matrix.

CUPROD Function

The CUPROD function computes the cumulative product of elements in a matrix.

DIF Function

The DIF function computes the differences between data values and one or more lagged (shifted) values for
time series data.

ELEMENT Function

The ELEMENT function returns a matrix that indicates which elements of one matrix are also elements of
a second matrix.

FULL Function

The FULL function converts a matrix stored in a sparse format into a matrix stored in a dense format. See
the SPARSE function for a description of how sparse matrices are stored.

LAG Function

The LAG function computes one or more lagged (shifted) values for time series data.

MEAN Function

The MEAN function computes a sample mean of data. The function can compute arithmetic means, trimmed
means, and Winsorized means.
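
For example, the following statements (a small sketch with arbitrary data) contrast an arithmetic mean with
a trimmed mean that excludes one value from each tail:

proc iml;
x = {1, 2, 3, 4, 100};
m1 = mean(x);               /* arithmetic mean */
m2 = mean(x, "trimmed", 1); /* excludes one value from each tail */
print m1 m2;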

PROD Function

The PROD function computes the product of elements in one or more matrices.

QNTL Call

The QNTL subroutine computes sample quantiles for data.

RANCOMB Function

The RANCOMB function returns random combinations of n elements taken k at a time.

RANGE Function

The RANGE function returns the range of values for a set of matrices.

RANPERM Function

The RANPERM function returns random permutations of n elements.

SHAPECOL Function

The SHAPECOL function reshapes and repeats values by columns.

SQRVECH Function

The SQRVECH function converts a symmetric matrix which is stored columnwise to a square matrix.

STD Function

The STD function computes a sample standard deviation for each column of a data matrix.

SPARSE Function

The SPARSE function converts a matrix that contains many zeros into a matrix stored in a sparse format,
which is suitable for use with the ITSOLVER subroutine or the SOLVELIN subroutine.
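
The following statements sketch the round trip between the two storage formats (the matrix is arbitrary):

proc iml;
a = {0 0 5,
     0 0 0,
     2 0 0};
s = sparse(a); /* rows contain each nonzero value and its row and column indices */
b = full(s);   /* reconstruct the dense matrix */
print s, b;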

TABULATE Call

The TABULATE subroutine counts the number of elements in each of the unique categories of the argument.

VAR Function

The VAR function computes a sample variance for each column of a data matrix.

VECH Function

The VECH function creates a vector from the columns of the lower triangular elements of a matrix.

Changes to the IMLMLIB Library

The CORR module has been removed from the IMLMLIB library. In its place is the built-in CORR function.
The MEDIAN, QUARTILE, and STANDARD modules now support missing values in the data argument.

Documentation Enhancements

The first six chapters of the SAS/IML User's Guide have been completely rewritten in order to provide new
users with a gentle introduction to the SAS/IML language. Two new chapters have been written:

• Chapter 10, "Submitting SAS Statements," describes how to call SAS procedures from within PROC
IML.

• Chapter 11, "Calling Functions in the R Language," describes how to call R functions from within
PROC IML.
Chapter 2

Introduction to SAS/IML Software

Contents
Overview of SAS/IML Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Highlights of SAS/IML Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
An Introductory SAS/IML Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
PROC IML Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Conventions Used in This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Typographical Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Output of Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Overview of SAS/IML Software

SAS/IML software gives you access to a powerful and flexible programming language in a dynamic,
interactive environment. The acronym IML stands for interactive matrix language.
The fundamental object of the language is a data matrix. You can use SAS/IML software interactively (at
the statement level) to see results immediately, or you can submit blocks of statements or an entire program.
You can also encapsulate a series of statements by defining a module; you can call the module later to
execute all of the statements in the module.
SAS/IML software is powerful. SAS/IML software enables you to concentrate on solving problems because
necessary (but distracting) activities such as memory allocation and dimensioning of matrices are performed
automatically. You can use built-in operators and call routines to perform complex tasks in numerical linear
algebra such as matrix inversion or the computation of eigenvalues. You can define your own functions and
subroutines by using SAS/IML modules. You can perform operations on a single value or take advantage of
matrix operators to perform operations on an entire data matrix. For example, the following statement adds
1 to every element of the matrix x, regardless of the dimensions of x:

x = x+1;

The SAS/IML language contains statements that enable you to manage data. You can read, create, and
update SAS data sets in SAS/IML software without using the DATA step. For example, the following
statement reads a SAS data set to obtain phone numbers for all individuals whose last name begins with
"Smith":

read all var{phone} where(lastname=:"Smith");

The result is phone, a vector of phone numbers.



Highlights of SAS/IML Software

SAS/IML provides a high-level programming language.

You can program easily and efficiently with the many features for arithmetic and character expressions in
SAS/IML software. You can access a wide variety of built-in functions and subroutines designed to make
your programming fast, easy, and efficient. Because SAS/IML software is part of the SAS System, you can
access SAS data sets or external files with an extensive set of data processing commands for data input and
output, and you can edit existing SAS data sets or create new ones.
SAS/IML software has a complete set of control statements, such as DO/END, START/FINISH, iterative
DO, IF-THEN/ELSE, GOTO, LINK, PAUSE, and STOP, giving you all of the commands necessary for
execution control and program modularization. See the section Control Statements on page 20 for details.

SAS/IML software operates on matrices.

Functions and statements in most programming languages manipulate and compare a single data element.
However, the fundamental data element in SAS/IML software is the matrix, a two-dimensional (row ×
column) array of numeric or character values.

SAS/IML software possesses a powerful vocabulary of operators.

You can access built-in matrix operations that require calls to math-library subroutines in other languages.
You can access many matrix operators, functions, and subroutines.

SAS/IML software uses operators that apply to entire matrices.

You can add elements of the matrices A and B with the expression A+B. You can perform matrix multiplica-
tion with the expression A*B and perform elementwise multiplication with the expression A#B.
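
The following statements (a small sketch with arbitrary matrices) contrast the two operators:

proc iml;
A = {1 2, 3 4};
B = {0 1, 1 0};
m = A*B; /* matrix multiplication */
e = A#B; /* elementwise multiplication */
print m e;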

SAS/IML software is interactive.

You can execute SAS/IML statements one at a time and see the results immediately, or you can submit
blocks of statements or an entire program. You can also define a module that encapsulates a series of
statements. You can interact with an executing module by using the PAUSE statement, which enables you
to enter additional statements before continuing execution.

SAS/IML software is dynamic.

You do not need to declare, dimension, or allocate storage for a data matrix. SAS/IML software does this
automatically. You can change the dimension or type of a matrix at any time. You can open multiple files
or access many libraries. You can reset options or replace modules at any time.

SAS/IML software processes data.

You can read observations from a SAS data set. You can create either multiple vectors (one for each variable
in the data set) or a single matrix that contains a column for each data set variable. You can create a new
SAS data set, or you can edit or append observations to an existing SAS data set.

An Introductory SAS/IML Program

This section presents a simple introductory SAS/IML program that implements a numerical algorithm that
estimates the square root of a number, accurate to three decimal places. The following statements define a
function module named MySqrt that performs the calculations:

proc iml; /* begin IML session */

start MySqrt(x); /* begin module */


y = 1; /* initialize y */
do until(w<1e-3); /* begin DO loop */
z = y; /* set z=y */
y = 0.5#(z+x/z); /* estimate square root */
w = abs(y-z); /* compute change in estimate */
end; /* end DO loop */
return(y); /* return approximation */
finish; /* end module */

You can call the MySqrt module to estimate the square root of several numbers given in a matrix literal
(enclosed in braces) and print the results:

t = MySqrt({3,4,7,9}); /* call function MySqrt */


s = sqrt({3,4,7,9}); /* compare with true values */
diff = t - s; /* compute differences */
print t s diff; /* print matrices */

Figure 2.1 Approximate Square Roots

t s diff

1.7320508 1.7320508 0
2 2 2.22E-15
2.6457513 2.6457513 4.678E-11
3 3 1.397E-9

PROC IML Statement


PROC IML < SYMSIZE=n1 > < WORKSIZE=(n2) > ;
< SAS/IML language statements > ;
QUIT ;

You can specify the following options in the PROC IML statement:

SYMSIZE=n1
specifies the size of memory, in kilobytes, that is allocated to the PROC IML symbol space.

WORKSIZE=n2
specifies the size of memory, in kilobytes, that is allocated to the PROC IML workspace.

If you do not specify any options, PROC IML uses host-dependent defaults. In general, you do not need to
be concerned with the details of memory usage because memory allocation is done automatically. However,
see the section "Memory and Workspace" on page 537 for special situations.
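
For example, the following statement starts the procedure with one megabyte allocated to the symbol space
(the value shown is arbitrary and merely illustrative):

proc iml symsize=1024;
   x = 1:10;
   print x;
quit;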

Conventions Used in This Book

Typographical Conventions

This book uses several type styles for presenting information. The following list explains the meaning of
the typographical conventions used in this book:

text is the standard type style used for most text.


FUNCTION is used for the name of SAS/IML functions, subroutines, and statements when
they appear in the text. This convention is also used for SAS statements and
options. However, you can enter these elements in your own SAS programs in
lowercase, uppercase, or a mixture of the two.
SYNTAX is used in the "Syntax" sections' initial lists of SAS statements and options.
argument is used for option values that must be supplied by the user in the syntax definitions.
VariableName is used for the names of variables and data sets when they appear in the text.
LibName is used for the names of SAS librefs (such as Sasuser) when they appear in the
text.
bold is used to refer to mathematical matrices and vectors such as in the equation
y = Ax.

Code is used to refer to SAS/IML matrices, vectors, and expressions in the SAS/IML
language such as the expression y = A*x. This convention is also used for example
code. In most cases, this book uses lowercase type for SAS/IML statements.
italic is used for terms that are defined in the text, for emphasis, and for references to
publications.

Output of Examples

This documentation contains many short examples that illustrate how to use the SAS/IML language. Many
examples end with a PRINT statement; the output for these examples appears immediately after the program
statements.
Chapter 3

Understanding the SAS/IML Language

Contents
Defining a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Matrix Names and Literals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Matrix Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Matrix Literals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Creating Matrices from Matrix Literals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Scalar Literals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Numeric Literals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Character Literals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Repetition Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Reassigning Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Assignment Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Types of Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Control Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
CALL Statements and Subroutines . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Command Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Missing Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

Defining a Matrix

A matrix is the fundamental structure in the SAS/IML language. A matrix is a two-dimensional array of
numeric or character values. Matrices are useful for working with data and have the following properties:

• Matrices can be either numeric or character. Elements of a numeric matrix are double-precision
values. Elements of a character matrix are character strings of equal length.

• The name of a matrix must be a valid SAS name.

• Matrices have dimensions defined by the number of rows and columns.

• Matrices can contain elements that have missing values (see the section "Missing Values" on page 26).

The dimensions of a matrix are defined by the number of rows and columns. An n × p matrix has np
elements arranged in n rows and p columns. The following nomenclature is standard in this book:

• 1 × 1 matrices are called scalars.

• 1 × p matrices are called row vectors.

• n × 1 matrices are called column vectors.

• The type of a matrix is numeric if its elements are numbers; the type is character if its elements
are character strings. A matrix that has not been assigned values has an undefined type.

Matrix Names and Literals

Matrix Names

The name of a matrix must be a valid SAS name: a character string that contains between 1 and 32
characters, begins with a letter or underscore, and contains only letters, numbers, and underscores. You associate
a name with a matrix when you create or define the matrix. A matrix name exists independently of values.
This means that you can change the values associated with a particular matrix name, change the dimension
of the matrix, or even change its type (numeric or character).

Matrix Literals

A matrix literal is an enumeration of the values of a matrix. For example, {1,2,3} is a numeric matrix with
three elements. A matrix literal can have a single element (a scalar), or it can be an array of many elements.
The matrix can be numeric or character. The dimensions of the matrix are automatically determined by the
way you punctuate the values.
Use curly braces ({ }) to enclose the values of a matrix. Within the braces, values must be either all numeric
or all character. Use commas to separate the rows. If you specify multiple rows, all rows must have the
same number of elements.
You can specify any of the following types of elements:

• a number. You can specify numbers with or without decimal points, and in standard or scientific
notation. For example, 5, 3.14, or 1E-5.

• a period (.), which represents a missing numeric value.

• a number in brackets ([ ]), which represents a repetition factor.

• a character string. Character strings can be enclosed in single quotes (') or double quotes ("), but they
do not need to have quotes. Quotes are required when there are no enclosing braces or when you want
to preserve case, special characters, or blanks in the string. Special characters include the following:
?, =, *, :, (, ), {, and }.
If the string has embedded quotes, you must double them, as shown in the following statements:

w1 = "I said, ""Don't fall!""";


w2 = 'I said, "Don''t fall!"';

Creating Matrices from Matrix Literals

You can create a matrix by using matrix literals: simply list the element values inside of curly braces. You
can also create a matrix by calling a function, a subroutine, or an assignment statement. The following
sections present some simple examples of matrix literals. For more information about matrix literals, see
Chapter 5, "Working with Matrices."

Scalar Literals

The following example statements define scalars as literals. These examples are simple assignment state-
ments with a matrix name on the left-hand side of the equal sign and a value on the right-hand side. Notice
that you do not need to use braces when there is only one element.

a = 12;
a = . ;
a = 'hi there';
a = "Hello";

Numeric Literals

To specify a matrix literal with multiple elements, enclose the elements in braces. Use commas to separate
the rows of a matrix. For example, the following statements assign and print matrices of various dimensions:

x = {1 . 3 4 5 6}; /* 1 x 6 row vector */


y = {1,2,3,4}; /* 4 x 1 column vector */
z = 3#y; /* 3 times the vector y */
w = {1 2, 3 4, 5 6}; /* 3 x 2 matrix */
print x, y z w;

Figure 3.1 Matrices Created from Numeric Literals

x

1 . 3 4 5 6

y z w

1 3 1 2
2 6 3 4
3 9 5 6
4 12

Character Literals

You can define a character matrix literal by specifying character strings between braces. If you do not place
quotes around the strings, all characters are converted to uppercase. You can use either single or double
quotes to preserve case and to specify strings that contain blanks or special characters. For character matrix
literals, the length of the elements is determined by the longest element. Shorter strings are padded on the
right with blanks. For example, the following statements define and print two 1 × 2 character matrices with
string length 4 (the length of the longer string):

a = { abc defg}; /* no quotes; uppercase */


b = {'abc' 'DEFG'}; /* quotes; case preserved */
print a, b;

Figure 3.2 Matrices Created from Character Literals

a

ABC DEFG

b

abc DEFG

Repetition Factors

A repetition factor can be placed in brackets before a literal element to have the element repeated. For
example, the following two statements are equivalent:

answer = {[2] 'Yes', [2] 'No'};


answer = {'Yes' 'Yes', 'No' 'No'};

Reassigning Values

You can assign new values to a matrix at any time. The following statements create a 2 × 3 numeric matrix
named a, then redefine a to be a 1 × 3 character matrix:

a = {1 2 3, 6 5 4};
a = {'Sales' 'Marketing' 'Administration'};

Assignment Statements

Assignment statements create matrices by evaluating expressions and assigning the results. The expressions
can be composed of operators (for example, matrix multiplication) or functions that operate on matrices (for
example, matrix inversion). The resulting matrices automatically acquire appropriate characteristics and
values. Assignment statements have the general form result = expression where result is the name of the
new matrix and expression is an expression that is evaluated.

Functions as Expressions

You can create matrices as a result of a function call. Scalar functions such as LOG or SQRT operate on
each element of a matrix, whereas matrix functions such as INV or RANK operate on the entire matrix. The
following statements are examples of function calls:

a = sqrt(b); /* elementwise square root */


y = inv(x); /* matrix inversion */
r = rank(x); /* ranks (order) of elements */

The SQRT function assigns each element of a the square root of the corresponding element of b. The INV
function computes the inverse matrix of x and assigns the results to y. The RANK function creates a matrix
r with elements that are the ranks of the corresponding elements of x.

Operators within Expressions

Three types of operators can be used in assignment statement expressions. The matrices on which an
operator acts must have types and dimensions that are conformable to the operation. For example, matrix
multiplication requires that the number of columns of the left-hand matrix be equal to the number of rows
of the right-hand matrix.
The three types of operators are as follows:

• Prefix operators are placed in front of an operand (-A).

• Binary operators are placed between operands (A*B).

• Postfix operators are placed after an operand (A`).



All operators can work on scalars, vectors, or matrices, provided that the operation makes sense. For
example, you can add a scalar to a matrix or divide a matrix by a scalar. The following statement is an
example of using operators in an assignment statement:

y = x#(x>0);

This assignment statement creates a matrix y in which each negative element of the matrix x is replaced
with zero. The statement actually contains two expressions that are evaluated. The expression x>0 is an
operation that compares each element of x to zero and creates a temporary matrix of results; an element of
the temporary matrix is 1 when the corresponding element of x is positive, and 0 otherwise. The original
matrix x is then multiplied elementwise by the temporary matrix, resulting in the matrix y.
See Chapter 23, "Language Reference," for a complete listing and explanation of operators.

Types of Statements

Statements in the SAS/IML language can be classified into three general categories:

Control statements
direct the flow of execution. For example, the IF-THEN/ELSE statement conditionally controls
statement execution.

Functions and CALL statements


perform special tasks or user-defined operations. For example, the statement CALL EIGEN computes
eigenvalues and eigenvectors.

Command statements
perform special processing, such as setting options, displaying windows, and handling input and
output. For example, the MATTRIB statement associates matrix characteristics with matrix names.

Control Statements

The SAS/IML language has statements that control program execution. You can use control statements to
direct the execution of your program and to define DO groups and modules. Some control statements are
shown in the following table:

Table 3.1 Control Statements


Statement Description
DO, END Specifies a group of statements
Iterative DO, END Defines an iteration loop
GOTO, LINK Specifies the next program statement to be executed
IF-THEN/ELSE Conditionally routes execution
PAUSE Instructs a module to pause during execution
QUIT Exits from the IML procedure
RESUME Instructs a module to resume execution
RETURN Returns from a LINK statement or module
RUN Executes a module
START, FINISH Defines a module
STOP, ABORT Stops the execution of an IML program

See Chapter 6, "Programming Statements," for more information about control statements.

Functions

The general form of a function is result = FUNCTION(arguments) where arguments is a list of matrix
names, matrix literals, or expressions. Functions always return a single matrix, whereas subroutines can
return multiple matrices or no matrices at all. If a function returns a character matrix, the matrix to hold the
result is allocated with a string length equal to the longest element, and all shorter elements are padded on
the right with blanks.

Categories of Functions

Many functions fall into one of the following general categories:

scalar functions
operate on each element of the matrix argument. For example, the ABS function returns a matrix with
elements that are the absolute values of the corresponding elements of the argument matrix.

matrix inquiry functions


return information about a matrix. For example, the ANY function returns a value of 1 if any of the
elements of the argument matrix are nonzero.

summary functions
return summary statistics based on all elements of the matrix argument. For example, the SSQ
function returns the sum of squares of all elements of the argument matrix.

matrix reshaping functions


manipulate the matrix argument and return a reshaped matrix. For example, the DIAG function
returns a diagonal matrix with values and dimensions that are determined by the argument matrix.

linear algebraic functions


perform matrix algebraic operations on the argument. For example, the TRACE function returns the
trace of the argument matrix.

statistical functions
perform statistical operations on the matrix argument. For example, the RANK function returns a
matrix that contains the ranks of the argument matrix.
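
The following statements (a small sketch with arbitrary values) exercise one function from each of these
categories:

proc iml;
x = {-1 2, 3 -4};
a = abs(x);        /* scalar function: elementwise absolute values */
nz = any(x);       /* matrix inquiry: 1 if any element is nonzero */
total = ssq(x);    /* summary function: sum of squares of all elements */
d = diag({1 2 3}); /* matrix reshaping: 3 x 3 diagonal matrix */
t = trace(d);      /* linear algebra: sum of the diagonal elements */
r = rank(x);       /* statistical: ranks of the elements of x */
print a nz total d t r;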

The SAS/IML language also provides functions in the following general categories:

• matrix sorting and BY-group processing

• numerical linear algebra

• optimization

• random number generation

• time series analysis

• wavelet analysis

See the section "Statements, Functions, and Subroutines by Category" on page 551 for a complete listing of
SAS/IML functions.

Exceptions to the SAS DATA Step

The SAS/IML language supports most functions that are supported in the SAS DATA step. These
functions almost always accept matrix arguments and usually act elementwise so that the result has the same
dimension as the argument. See the section "Base SAS Functions Accessible from SAS/IML Software" on
page 1044 for a list of these functions and also a small list of functions that are not supported by SAS/IML
software or that behave differently than their Base SAS counterparts.
The SAS/IML random number functions UNIFORM and NORMAL are built-in functions that produce the
same streams as the RANUNI and RANNOR functions, respectively, of the DATA step. For example, you
can use the following statement to create a 10 × 1 vector of random numbers:

x = uniform(repeat(0,10,1));

SAS/IML software does not support the OF clause of the SAS DATA step. For example, the following
statement cannot be interpreted in SAS/IML software:

a = mean(of x1-x10); /* invalid in the SAS/IML language */

The term x1-x10 would be interpreted as subtraction of the two matrix arguments rather than its DATA step
meaning, the variables X1 through X10.

CALL Statements and Subroutines

Subroutines (also called CALL statements) perform calculations, perform operations, or interact with the SAS
system. CALL statements are often used in place of functions when the operation returns multiple results or,
in some cases, no result. The general form of the CALL statement is
CALL SUBROUTINE (arguments) ;

where arguments can be a list of matrix names, matrix literals, or expressions. If you specify several
arguments, use commas to separate them. When using output arguments that are computed by a subroutine,
always use variable names instead of expressions or literals.

Creating Matrices with CALL Statements

Matrices are created whenever a CALL statement returns one or more result matrices. For example, the
following statement returns two matrices (vectors), val and vec, that contain the eigenvalues and eigenvectors,
respectively, of the matrix A:

call eigen(val,vec,A);

You can program your own subroutine by using the START and FINISH statements to define a module. You
can then execute the module with a CALL or RUN statement. For example, the following statements define
a module named MyMod which returns matrices that contain the square root and log of each element of the
argument matrix:

start MyMod(a,b,c);
a=sqrt(c);
b=log(c);
finish;
run MyMod(S,L,{1 2 4 9});

Execution of the module statements creates matrices S and L which contain the square roots and natural
logs, respectively, of the elements of the third argument.

Interacting with the SAS System

You can use CALL statements to manage SAS data sets or to access the PROC IML graphics system. For
example, the following statement deletes the SAS data set named MyData:

call delete(MyData);

The following statements activate the graphics system and produce a crude scatter plot:

x = 0:100;
y = 50 + 50*sin(6.28*x/100);
call gstart; /* activate the graphics system */
call gopen; /* open a new graphics segment */
call gpoint(x,y); /* plot the points */
call gshow; /* display the graph */
call gclose; /* close the graphics segment */

SAS/IML Studio, which is distributed as part of SAS/IML software, contains graphics that are easier to
use and more powerful than the older GSTART/GCLOSE graphics in PROC IML. See the SAS/IML Studio
User's Guide for a description of the graphs in SAS/IML Studio.

Command Statements

Command statements are used to perform specific system actions, such as storing and loading matrices and
modules, or to perform special data processing requests. The following table lists some commands and the
actions they perform.

Table 3.2 Command Statements


Statement Description
FREE Frees memory associated with a matrix
LOAD Loads a matrix or module from a storage library
MATTRIB Associates printing attributes with matrices
PRINT Prints a matrix or message
RESET Sets various system options
REMOVE Removes a matrix or module from library storage
SHOW Displays system information
STORE Stores a matrix or module in the storage library

These commands play an important role in SAS/IML software. You can use them to control information
displayed about matrices, symbols, or modules.
If a certain computation requires almost all of the memory on your computer, you can use commands to
store extraneous matrices in the storage library, free the matrices of their values, and reload them later when
you need them again. For example, the following statements define several matrices:

proc iml;
a = {1 2 3, 4 5 6, 7 8 9};
b = {2 2 2};
show names;

Figure 3.3 List of Symbols in RAM

SYMBOL ROWS COLS TYPE SIZE


------ ------ ------ ---- ------
a 3 3 num 8
b 1 3 num 8
Number of symbols = 2 (includes those without values)

Suppose that you want to compute a quantity that does not involve the a matrix or the b matrix. You can
store a and b in a library storage with the STORE command, and release the space with the FREE command.

To list the matrices and modules in library storage, use the SHOW STORAGE command (or the STORAGE
function), as shown in the following statements:

store a b; /* store the matrices */


show storage; /* make sure the matrices are saved */
free a b; /* free the RAM */

The output from the SHOW STORAGE statement (see Figure 3.4) indicates that there are two matrices in
storage. (There are no modules in storage for this example.)

Figure 3.4 List of Symbols in Storage

Contents of storage library = WORK.IMLSTOR

Matrices:
A B

Modules:

You can load these matrices from the storage library into RAM with the LOAD command, as shown in the
following statement:

load a b;

See Chapter 17, "Storage Features," for more details about storing modules and matrices.

Data Management Commands

SAS/IML software has many commands that enable you to manage your SAS data sets from within the
SAS/IML environment. These data management commands operate on SAS data sets. There are also
commands for accessing external files. The following table lists some commands and the actions they
perform.

Table 3.3 Data Management Statements


Statement Description
APPEND Adds records to an output SAS data set
CLOSE Closes a SAS data set
CREATE Creates a new SAS data set
DELETE Deletes records in an output SAS data set
EDIT Reads from or writes to an existing SAS data set
FIND Finds records that satisfy some condition
LIST Lists records
PURGE Purges records marked for deletion
READ Reads records from a SAS data set into IML matrices
SETIN Sets a SAS data set to be the input data set
SETOUT Sets a SAS data set to be the output data set
SORT Sorts a SAS data set
USE Opens an existing SAS data set for reading

These commands can be used to perform data management. For example, you can read observations from
a SAS data set into a target matrix with the USE or EDIT command. You can edit a SAS data set and
append or delete records. If you have a matrix of values, you can output the values to a SAS data set with
the APPEND command. See Chapter 7, "Working with SAS Data Sets," and Chapter 8, "File Access," for
more information about these commands.
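
For example, the following statements sketch a typical round trip (Sashelp.Class is a sample data set that is
distributed with SAS): two variables are read into a matrix, and their column means are written to a new
data set:

proc iml;
use Sashelp.Class;                   /* open a data set for reading */
read all var {Height Weight} into X; /* read two variables into a matrix */
close Sashelp.Class;

M = X[:,];                           /* column means (subscript reduction) */
create Work.Means from M[colname={"MeanHeight" "MeanWeight"}];
append from M;
close Work.Means;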

Missing Values

With SAS/IML software, a numeric element can have a special value called a missing value, which indicates
that the value is unknown or unspecified. Such missing values are coded, for logical comparison purposes,
in the bit pattern of very large negative numbers. A numeric matrix can have any mixture of missing
and nonmissing values. A matrix with missing values should not be confused with an empty or unvalued
matrix; that is, a matrix with zero rows and zero columns.
In matrix literals, a numeric missing value is specified as a single period (.). In data processing operations
that involve a SAS data set, you can append or delete missing values. All operations that move values also
move missing values.
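
For example, the following statements (a small sketch) use the COUNTN and COUNTMISS functions to
tally the nonmissing and missing elements of a matrix literal:

proc iml;
x = {1 . 3,
     . 5 6};
nmiss = countmiss(x); /* number of missing values (2) */
nok = countn(x);      /* number of nonmissing values (4) */
print nmiss nok;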
However, for efficiency reasons, SAS/IML software does not support missing values in most matrix
operations and functions. For example, matrix multiplication of a matrix with missing values is not supported.
Furthermore, many linear algebraic operations are not mathematically defined for a matrix with missing
values. For example, the inverse of a matrix with missing values is meaningless.
See Chapter 5, "Working with Matrices," and Chapter 22, "Further Notes," for more details about missing
values.

Summary

This chapter introduced the fundamentals of the SAS/IML language, including the basic data element, the
matrix. You learned several ways to create matrices: assignment statements, matrix literals, and CALL
statements that return matrix results.
The chapter also introduced various types of programming statements: commands, control statements, iter-
ative statements, module definitions, functions, and subroutines.
Chapter 4, "Tutorial: A Module for Linear Regression," offers an introductory tutorial that demonstrates
how to use SAS/IML software for statistical computations.
Chapter 4

Tutorial: A Module for Linear Regression

Contents
Overview of Linear Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Example: Solving a System of Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . 30
A Module for Linear Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Orthogonal Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Plotting Regression Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Low-Resolution Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
SAS/IML Studio Graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Overview of Linear Regression

You can use SAS/IML software to solve mathematical problems or implement new statistical techniques and
algorithms. Formulas and matrix equations are easily translated in the SAS/IML language. For example, if
X is a data matrix and Y is a vector of observed responses, then you might be interested in the solution, b,
to the matrix equation Xb = Y. In statistics, the data matrices that arise often have more rows than columns
and so an exact solution to the linear system is impossible to find. Instead, the statistician often solves a
related equation: X'Xb = X'Y. The following mathematical formula expresses the solution vector in
terms of the data matrix and the observed responses:

b = (X'X)^(-1) X'Y

This mathematical formula can be translated into the following SAS/IML statement:

b = inv(X`*X) * X`*Y; /* least squares estimates */

This assignment statement uses a built-in function (INV) and matrix operators (transpose and matrix
multiplication). It is mathematically equivalent to (but less efficient than) the following alternative statement:

b = solve(X`*X, X`*Y); /* more efficient computation */

If a statistical method has not been implemented directly in a SAS procedure, you might be able to program
it by using the SAS/IML language. The most commonly used mathematical and matrix operations are built
directly into the language, so programs that require many statements in other languages require only a few
SAS/IML statements.

Example: Solving a System of Linear Equations

Because the syntax of the SAS/IML language is similar to the notation used in linear algebra, it is often
possible to directly translate mathematical methods from matrix-algebraic expressions into executable
SAS/IML statements. For example, consider the problem of solving three simultaneous equations:

3x1 -  x2 + 2x3 = 8
2x1 - 2x2 + 3x3 = 2
4x1 +  x2 - 4x3 = 9

These equations can be written in matrix form as

[ 3  -1   2 ] [ x1 ]   [ 8 ]
[ 2  -2   3 ] [ x2 ] = [ 2 ]
[ 4   1  -4 ] [ x3 ]   [ 9 ]

and can be expressed symbolically as

Ax = c

where A is the matrix of coefficients for the linear system. Because A is nonsingular, the system has a
solution given by

x = A^(-1) c

This example solves this linear system of equations.

1. Define the matrices A and c. Both of these matrices are input as matrix literals; that is, you type the row
and column values as discussed in Chapter 3, "Understanding the SAS/IML Language."

proc iml;
a = {3 -1 2,
2 -2 3,
4 1 -4};
c = {8, 2, 9};

2. Solve the equation by using the built-in INV function and the matrix multiplication operator. The INV
function returns the inverse of a square matrix and * is the operator for matrix multiplication.
Consequently, the solution is computed as follows:

x = inv(a) * c;
print x;

Figure 4.1 The Solution of a Linear System of Equations


x

3
5
2

3. Equivalently, you can solve the linear system by using the more efficient SOLVE function, as shown in
the following statement:

x = solve(a, c);

After SAS/IML executes the statements, the rows of the vector x contain the x1, x2, and x3 values that solve
the linear system.
You can end PROC IML by using the QUIT statement:

quit;

A Module for Linear Regression

The linear systems that arise naturally in statistics are usually overconstrained, meaning that the X matrix
has more rows than columns and that an exact solution to the linear system is impossible to find. Instead,
the statistician assumes a linear model of the form

y = Xb + e

where y is the vector of responses, X is a design matrix, and b is a vector of unknown parameters that are
estimated by minimizing the sum of squares of e, the error or residual term.
The following example illustrates some programming techniques by using SAS/IML statements to perform
linear regression. (The example module does not replace regression procedures such as the REG procedure,
which are more efficient for regressions and offer a multitude of diagnostic options.)
Suppose you have response data y measured at five values of the independent variable X and you want to
perform a quadratic regression. In this case, you can define the design matrix X and the data vector y as
follows:

proc iml;
x = {1 1 1,
1 2 4,
1 3 9,
1 4 16,
1 5 25};
y = {1, 5, 9, 23, 36};

You can compute the least squares estimate of b by using the following statement:

b = inv(x`*x) * x`*y;
print b;

Figure 4.2 Parameter Estimates

b

2.4
-3.2
2

The predicted values are found by multiplying the data matrix and the parameter estimates; the residuals are
the differences between actual and predicted responses, as shown in the following statements:

yhat = x*b;
r = y-yhat;
print yhat r;

Figure 4.3 Predicted and Residual Values

yhat r

1.2 -0.2
4 1
10.8 -1.8
21.6 1.4
36.4 -0.4

To estimate the variance of the responses, calculate the sum of squared errors (SSE), the error degrees of
freedom (DFE), and the mean squared error (MSE) as follows:

sse = ssq(r);
dfe = nrow(x)-ncol(x);
mse = sse/dfe;
print sse dfe mse;

Figure 4.4 Statistics for a Linear Model

sse dfe mse

6.4 2 3.2

Notice that in computing the degrees of freedom, you use the function NCOL to return the number of
columns of X and the function NROW to return the number of rows.
Now suppose you want to solve the problem repeatedly on new data. To do this, you can define a module.
Modules begin with a START statement and end with a FINISH statement, with the program statements in
between. The following statements define a module named Regress to perform linear regression:

start Regress; /* begins module */


xpxi = inv(x`*x); /* inverse of X'X */
beta = xpxi * (x`*y); /* parameter estimate */
yhat = x*beta; /* predicted values */
resid = y-yhat; /* residuals */

sse = ssq(resid); /* SSE */


n = nrow(x); /* sample size */
dfe = nrow(x)-ncol(x); /* error DF */
mse = sse/dfe; /* MSE */
cssy = ssq(y-sum(y)/n); /* corrected total SS */
rsquare = (cssy-sse)/cssy; /* RSQUARE */
print ,"Regression Results", sse dfe mse rsquare;

stdb = sqrt(vecdiag(xpxi)*mse); /* std of estimates */


t = beta/stdb; /* parameter t tests */
prob = 1-probf(t#t,1,dfe); /* p-values */
print ,"Parameter Estimates",, beta stdb t prob;
print ,y yhat resid;
finish Regress; /* ends module */

Assuming that the matrices x and y are defined, you can run the Regress module as follows:

run Regress; /* executes module */

Figure 4.5 The Results of a Regression Module

Regression Results

sse dfe mse rsquare

6.4 2 3.2 0.9923518

Parameter Estimates

beta stdb t prob

2.4 3.8366652 0.6255432 0.5954801


-3.2 2.923794 -1.094468 0.387969
2 0.4780914 4.1833001 0.0526691

y yhat resid

1 1.2 -0.2
5 4 1
9 10.8 -1.8
23 21.6 1.4
36 36.4 -0.4

Orthogonal Regression

In the previous section, you ran a module that computes parameter estimates and statistics for a linear
regression model. All of the matrices used in the Regress module are global variables because the Regress
module does not have any arguments. Consequently, you can use those matrices in additional calculations.
Suppose you want to correlate the parameter estimates. To do this, you can calculate the covariance of the
estimates, then scale the covariance into a correlation matrix with values of 1 on the diagonal. You can
perform these operations by using the following statements:

reset print; /* turns on auto printing */


covb = xpxi*mse; /* covariance of estimates */
s = 1/sqrt(vecdiag(covb)); /* standard errors */
corrb = diag(s)*covb*diag(s); /* correlation of estimates */

The RESET PRINT statement causes the IML procedure to print the result of each assignment statement, as
shown in Figure 4.6. The covariance matrix of the estimates is contained in the covb matrix. The vector s
contains the standard errors of the parameter estimates and is used to compute the correlation matrix of the
estimates (corrb). These statistics are shown in Figure 4.6.

Figure 4.6 Covariance and Correlation Matrices for Estimates

Regression Results

sse dfe mse rsquare

6.4 2 3.2 0.9923518

Parameter Estimates

beta stdb t prob

2.4 3.8366652 0.6255432 0.5954801


-3.2 2.923794 -1.094468 0.387969
2 0.4780914 4.1833001 0.0526691

y yhat resid

1 1.2 -0.2
5 4 1
9 10.8 -1.8
23 21.6 1.4
36 36.4 -0.4

covb 3 rows 3 cols (numeric)

14.72 -10.56 1.6


-10.56 8.5485714 -1.371429
1.6 -1.371429 0.2285714

s 3 rows 1 col (numeric)



Figure 4.6 continued

0.260643
0.3420214
2.0916501

corrb 3 rows 3 cols (numeric)

1 -0.941376 0.8722784
-0.941376 1 -0.981105
0.8722784 -0.981105 1

You can also use the Regress module to carry out an orthogonalized regression version of the previous
polynomial regression. In general, the columns of X are not orthogonal. You can use the ORPOL function
to generate orthogonal polynomials for the regression. Using them provides greater computing accuracy and
reduced computing times. When you use orthogonal polynomial regression, you can expect the statistics of
fit to be the same and expect the estimates to be more stable and uncorrelated.
To perform an orthogonal regression on the data, you must first create a vector that contains the values of
the independent variable x, which is the second column of the design matrix X. Then, use the ORPOL
function to generate orthogonal second degree polynomials. You can perform these operations by using the
following statements:

x1 = x[,2]; /* data = second column of X */


x = orpol(x1, 2); /* generates orthogonal polynomials */
reset noprint; /* turns off auto printing */
run Regress; /* runs Regress module */

reset print; /* turns on auto printing */


covb = xpxi*mse;
s = 1 / sqrt(vecdiag(covb));
corrb = diag(s)*covb*diag(s);
reset noprint;

Figure 4.7 Covariance and Correlation Matrices for Estimates

x1 5 rows 1 col (numeric)

1
2
3
4
5

x 5 rows 3 cols (numeric)

0.4472136 -0.632456 0.5345225


0.4472136 -0.316228 -0.267261
0.4472136 1.755E-17 -0.534522
0.4472136 0.3162278 -0.267261
0.4472136 0.6324555 0.5345225

Regression Results

Figure 4.7 continued

sse dfe mse rsquare

6.4 2 3.2 0.9923518

Parameter Estimates

beta stdb t prob

33.093806 1.7888544 18.5 0.0029091


27.828043 1.7888544 15.556349 0.0041068
7.4833148 1.7888544 4.1833001 0.0526691

y yhat resid

1 1.2 -0.2
5 4 1
9 10.8 -1.8
23 21.6 1.4
36 36.4 -0.4

covb 3 rows 3 cols (numeric)

3.2 0 0
0 3.2 0
0 0 3.2

s 3 rows 1 col (numeric)

0.559017
0.559017
0.559017

corrb 3 rows 3 cols (numeric)

1 0 0
0 1 0
0 0 1

For these data, the off-diagonal values of the corrb matrix are displayed as zeros. For some analyses
you might find that certain matrix elements are very close to zero but not exactly zero because of the
computations of floating-point arithmetic. You can use the RESET FUZZ option to control whether small
values are printed as zeros.
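For example, statements such as the following redisplay the correlation matrix with the FUZZ option in
effect:

reset fuzz;    /* print values that are very close to zero as zero */
print corrb;
reset nofuzz;  /* restore the default display */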

Plotting Regression Results

SAS/IML software includes SAS/IML Studio, an environment for developing SAS/IML programs. SAS/IML
Studio includes high-level statistical graphics such as scatter plots, histograms, and bar charts. You can use
the SAS/IML Studio graphical user interface (GUI) to create graphs, or you can create and modify graphics
by writing programs. The GUI is described in the SAS/IML Studio Users Guide. See SAS/IML Studio for
SAS/STAT Users for an introduction to programming in SAS/IML Studio.
You can also produce high-resolution graphics by using the GXYPLOT module in the IMLMLIB library;
see Chapter 24, "Module Library." Also see Chapter 15, "Graphics Examples," for more information about
high-resolution graphics.
You can create some simple plots in PROC IML by using the PGRAF subroutine, which produces
low-resolution scatter plots.

Low-Resolution Plots

You can continue the example of this chapter by using the PGRAF subroutine to create low-resolution
plots.
The following statements plot the residual values versus the explanatory variable:

xy = x1 || resid;
reset linesize=78 pagesize=20;
call pgraf(xy,'r','x','Residuals','Plot of Residuals');

The first statement creates a matrix by using the horizontal concatenation operator (||) to concatenate x1
with resid. The two-column matrix xy contains the pairs of points that the PGRAF subroutine plots. The
PGRAF call produces the desired plot, as shown in Figure 4.8.

Figure 4.8 Residual Plot from the PGRAF Subroutine

Plot of Residuals
|
2 +
R |
e | r
s | r
i |
d |
u 0 +
a | r r
l |
s |
|
| r
-2 +
--------+------+------+------+------+------+------+------+------+--------
1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0

x

The arguments to PGRAF are as follows:

- an n × 2 matrix that contains the pairs of points

- a plotting symbol

- a label for the X axis

- a label for the Y axis

- a title for the plot

You can also plot the predicted values ŷ against x. You can create a matrix (say, xyh) that contains the points
to plot by concatenating x1 with yhat. The PGRAF subroutine plots the points, as shown in the following
statements. The resulting plot is shown in Figure 4.9.

xyh = x1 || yhat;
call pgraf(xyh,'*','x','Predicted','Plot of Predicted Values');

Figure 4.9 Predicted Value Plot from the PGRAF Subroutine

Plot of Predicted Values


|
40 +
P | *
r |
e |
d |
i |
c 20 + *
t |
e |
d | *
|
| *
0 + *
--------+------+------+------+------+------+------+------+------+--------
1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0

You can also use the PGRAF subroutine to create a low-resolution plot of the predicted and observed values
plotted against the explanatory variable, as shown in the following statements:

n = nrow(x1); /* number of observations */


newxy = (x1//x1) || (y//yhat); /* observed followed by predicted */
label = repeat('y',n,1) // repeat('p',n,1);/* 'y' followed by 'p' */
call pgraf(newxy,label,'x','y','Scatter Plot with Regression Line' );

The NROW function returns the number of rows of x1. The example creates a matrix newxy, which contains
the pairs of all observed values, followed by the pairs of predicted values. (Notice that you need to use both
the horizontal concatenation operator (||) and the vertical concatenation operator (//).) The matrix label
contains the character label for each point: a y for each observed point and a p for each predicted point.

Finally, the PGRAF subroutine plots the observed and predicted values by using the corresponding symbols,
as shown in Figure 4.10. For several points in Figure 4.10, the observed and predicted values are too close
together to be distinguishable in the low-resolution plot.

Figure 4.10 Plot of Predicted and Observed Values

Scatter Plot with Regression Line


|
40 +
| y
|
|
|
y | y
20 + p
|
|
| y
| y
| p
0 + y
--------+------+------+------+------+------+------+------+------+--------
1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0

SAS/IML Studio Graphics

If you develop your SAS/IML programs in SAS/IML Studio, you can use high-level statistical graphics. For
example, the following statements create three scatter plots that duplicate the low-resolution plots created in
the previous section. Two of the plots are shown in Figure 4.11. The main steps in the program are indicated
by numbered comments; these steps are explained in the list that follows the program.

x = {1 1 1, 1 2 4, 1 3 9, 1 4 16, 1 5 25}; /* 1 */
y = {1, 5, 9, 23, 36};
x1 = x[,2]; /* data = second column of X */
x = orpol(x1,2); /* generates orthogonal polynomials */
run Regress; /* runs the Regress module */

declare DataObject dobj; /* 2 */


dobj = DataObject.Create("Reg", /* 3 */
{"x" "y" "Residuals" "Predicted"},
x1 || y || resid || yhat);

declare ScatterPlot p1, p2, p3;


p1 = ScatterPlot.Create(dobj, "x", "Residuals"); /* 4 */
p1.SetTitleText("Plot of Residuals", true);

p2 = ScatterPlot.Create(dobj, "x", "Predicted"); /* 5 */


p2.SetTitleText("Plot of Predicted Values", true);

p3 = ScatterPlot.Create(dobj, "x", "y"); /* 6 */


p3.SetTitleText("Scatter Plot with Regression Line", true);
p3.DrawUseDataCoordinates();
p3.DrawLine(x1,yhat); /* 7 */

To completely understand this program, you should read SAS/IML Studio for SAS/STAT Users. The follow-
ing list describes the main steps of the program:

1. Use SAS/IML to create the data and run the Regress module.

2. Specify that the dobj variable is an object of the DataObject class. SAS/IML Studio extends the
SAS/IML language by adding object-oriented programming techniques.

3. Create an object of the DataObject class from SAS/IML vectors.

4. Create a scatter plot of the residuals versus the values of the explanatory variable.

5. Create a scatter plot of the predicted values versus the values of the explanatory variable.

6. Create a scatter plot of the observed responses versus the values of the explanatory variable.

7. Overlay a line for the predicted values.

Figure 4.11 Graphs Created by SAS/IML Studio


Chapter 5

Working with Matrices

Contents

Overview of Working with Matrices  41
Entering Data as Matrix Literals  42
   Scalars  42
   Matrices with Multiple Elements  43
Using Assignment Statements  44
   Simple Assignment Statements  44
   Functions That Generate Matrices  45
   Index Vectors  49
Using Matrix Expressions  50
   Operators  50
   Compound Expressions  51
   Elementwise Binary Operators  52
   Subscripts  53
   Subscript Reduction Operators  59
Displaying Matrices with Row and Column Headings  61
   The AUTONAME Option in the RESET Statement  61
   The ROWNAME= and COLNAME= Options in the PRINT Statement  62
   The MATTRIB Statement  62
More about Missing Values  63

Overview of Working with Matrices

SAS/IML software provides many ways to create matrices. You can create matrices by doing any of the
following:

- entering data as a matrix literal

- using assignment statements

- using functions that generate matrices

- creating submatrices from existing matrices with subscripts



- using SAS data sets (see Chapter 7, "Working with SAS Data Sets," for more information)

Chapter 3, "Understanding the SAS/IML Language," describes some of these techniques.


After you define matrices, you have access to many operators and functions for forming matrix expressions.
These operators and functions facilitate programming and enable you to refer to submatrices. This chapter
describes how to work with matrices in the SAS/IML language.

Entering Data as Matrix Literals

The simplest way to create a matrix is to define a matrix literal by entering the matrix elements. A matrix
literal can contain numeric or character data. A matrix literal can be a single element (called a scalar), a
single row of data (called a row vector), a single column of data (called a column vector), or a rectangular
array of data (called a matrix). The dimension of a matrix is given by its number of rows and columns. An
n × p matrix has n rows and p columns.

Scalars

Scalars are matrices that have only one element. You can define a scalar by typing the matrix name on the
left side of an assignment statement and its value on the right side. The following statements create and
display several examples of scalar literals:

proc iml;
x = 12;
y = 12.34;
z = .;
a = 'Hello';
b = "Hi there";
print x y z a b;

The output is displayed in Figure 5.1. Notice that you need to use either single quotes (') or double quotes
(") when defining a character literal. Using quotes preserves the case and embedded blanks of the literal. It
is also always correct to enclose data values within braces ({ }).

Figure 5.1 Examples of Scalar Quantities

x y z a b

12 12.34 . Hello Hi there



Matrices with Multiple Elements

To enter a matrix having multiple elements, use braces ({ }) to enclose the data values. If the matrix has
multiple rows, use commas to separate them. Inside the braces, all elements must be either numeric or
character. You cannot have a mixture of data types within a matrix. Each row must have the same number
of elements.
For example, suppose you have one week of data on daily coffee consumption (cups per day) for four people
in your office. Create a 4 × 5 matrix called coffee with each person's consumption represented by a row
of the matrix and each day represented by a column. The following statements use the RESET PRINT
command so that the result of each assignment statement is displayed automatically:

proc iml;
reset print;
coffee = {4 2 2 3 2,
3 3 1 2 1,
2 1 0 2 1,
5 4 4 3 4};

Figure 5.2 A 4 × 5 Matrix

coffee 4 rows 5 cols (numeric)

4 2 2 3 2
3 3 1 2 1
2 1 0 2 1
5 4 4 3 4

Next, you can create a character matrix called names whose rows contain the names of the coffee drinkers
in your office. Notice in Figure 5.3 that if you do not use quotes, characters are converted to uppercase.

names = {Jenny, Linda, Jim, Samuel};

Figure 5.3 A Column Vector of Names

names 4 rows 1 col (character, size 6)

JENNY
LINDA
JIM
SAMUEL

Notice that the RESET PRINT statement produces output that includes the name of the matrix, its dimensions,
its type, and (when the type is character) the element size of the matrix. The element size represents the
length of each string, and it is determined by the length of the longest string.
Next, display the coffee matrix using the elements of names as row names by specifying the ROWNAME=
option in the PRINT statement:

print coffee[rowname=names];

Figure 5.4 Rows of a Matrix Labeled by a Vector

coffee

JENNY 4 2 2 3 2
LINDA 3 3 1 2 1
JIM 2 1 0 2 1
SAMUEL 5 4 4 3 4

Using Assignment Statements

Assignment statements create matrices by evaluating expressions and assigning the results to a matrix. The
expressions can be composed of operators (for example, the matrix addition operator (+)), functions (for ex-
ample, the INV function), and subscripts. Assignment statements have the general form result = expression
where result is the name of the new matrix and expression is an expression that is evaluated. The resulting
matrix automatically acquires the appropriate dimension, type, and value. Details about writing expressions
are described in the section Using Matrix Expressions on page 50.

Simple Assignment Statements

Simple assignment statements involve an equation that has a matrix name on the left side and either an
expression or a function that generates a matrix on the right side.
Suppose that you want to generate some statistics for the weekly coffee data. If a cup of coffee costs 30
cents, then you can create a matrix with the daily expenses, dayCost, by multiplying the per-cup cost with
the matrix coffee. You can turn off the automatic printing so that you can customize the output with
the ROWNAME=, FORMAT=, and LABEL= options in the PRINT statement, as shown in the following
statements:

reset noprint;
dayCost = 0.30 # coffee; /* elementwise multiplication */
print dayCost[rowname=names format=8.2 label="Daily totals"];

Figure 5.5 Daily Cost for Each Employee

Daily totals

JENNY 1.20 0.60 0.60 0.90 0.60


LINDA 0.90 0.90 0.30 0.60 0.30
JIM 0.60 0.30 0.00 0.60 0.30
SAMUEL 1.50 1.20 1.20 0.90 1.20

You can calculate the weekly total cost for each person by using the matrix multiplication operator (*).
First create a 5 × 1 vector of ones. This vector sums the daily costs for each person when multiplied with
the coffee matrix. (A more efficient way to do this is by using subscript reduction operators, which are
discussed in Using Matrix Expressions on page 50.) The following statements perform the multiplication:

ones = {1,1,1,1,1};
weektot = dayCost * ones; /* matrix-vector multiplication */
print weektot[rowname=names format=8.2 label="Weekly totals"];

Figure 5.6 Weekly Total for Each Employee

Weekly totals

JENNY 3.90
LINDA 3.00
JIM 1.80
SAMUEL 6.00

You might want to calculate the average number of cups consumed per day in the office. You can use the
SUM function, which returns the sum of all elements of a matrix, to find the total number of cups consumed
in the office. Then divide the total by 5, the number of days. The number of days is also the number
of columns in the coffee matrix, which you can determine by using the NCOL function. The following
statements perform this calculation:

grandtot = sum(coffee);
average = grandtot / ncol(coffee);
print grandtot[label="Total number of cups"],
average[label="Daily average"];

Figure 5.7 Total and Average Number of Cups for the Office

Total number of cups

49

Daily average

9.8

Functions That Generate Matrices

SAS/IML software has many useful built-in functions that generate matrices. For example, the J function
creates a matrix with a given dimension and specified element value. You can use this function to initialize
a matrix to a predetermined size. Here are several functions that generate matrices:

BLOCK creates a block-diagonal matrix


DESIGNF creates a full-rank design matrix
I creates an identity matrix

J creates a matrix of a given dimension


REPEAT creates a new matrix by repeating elements of the argument matrix
SHAPE shapes a new matrix from the argument

The sections that follow illustrate the functions that generate matrices. The output of each example is
generated automatically by using the RESET PRINT statement:

reset print;

The BLOCK Function

The BLOCK function has the following general form:


BLOCK (matrix1,< matrix2,. . . ,matrix15 >) ;

The BLOCK function creates a block-diagonal matrix from the argument matrices. For example, the fol-
lowing statements form a block-diagonal matrix:

a = {1 1, 1 1};
b = {2 2, 2 2};
c = block(a,b);

Figure 5.8 A Block-Diagonal Matrix

c 4 rows 4 cols (numeric)

1 1 0 0
1 1 0 0
0 0 2 2
0 0 2 2

The J Function

The J function has the following general form:


J (nrow < ,ncol < ,value > >) ;

It creates a matrix that has nrow rows, ncol columns, and all elements equal to value. The ncol and value
arguments are optional; if they are not specified, default values are used. In many statistical applications, it
is helpful to be able to create a row (or column) vector of ones. (You did so to calculate coffee totals in the
previous section.) You can do this with the J function. For example, the following statement creates a 5  1
column vector of ones:

ones = j(5,1,1);

Figure 5.9 A Vector of Ones

ones 5 rows 1 col (numeric)

1
1
1
1
1

The I Function

The I function creates an identity matrix of a given size. It has the following general form:
I (dimension) ;

where dimension gives the number of rows. For example, the following statement creates a 3 × 3 identity
matrix:

I3 = I(3);

Figure 5.10 An Identity Matrix

I3 3 rows 3 cols (numeric)

1 0 0
0 1 0
0 0 1

The DESIGNF Function

The DESIGNF function generates a full-rank design matrix, which is useful in calculating ANOVA tables.
It has the following general form:
DESIGNF (column-vector) ;

For example, the following statement creates a full-rank design matrix for a one-way ANOVA, where the
treatment factor has three levels and there are n1 = 3, n2 = 2, and n3 = 2 observations at the factor levels:

d = designf({1,1,1,2,2,3,3});

Figure 5.11 A Design Matrix

d 7 rows 2 cols (numeric)



Figure 5.11 continued

1 0
1 0
1 0
0 1
0 1
-1 -1
-1 -1

The REPEAT Function

The REPEAT function creates a new matrix by repeating elements of the argument matrix. It has the
following syntax:
REPEAT (matrix, nrow, ncol) ;

The function repeats matrix a total of nrow × ncol times. The argument is repeated nrow times in the vertical
direction and ncol times in the horizontal direction. For example, the following statement creates a 4 × 6
matrix:

x = {1 2, 3 4};
r = repeat(x, 2, 3);

Figure 5.12 A Matrix of Repeated Values

r 4 rows 6 cols (numeric)

1 2 1 2 1 2
3 4 3 4 3 4
1 2 1 2 1 2
3 4 3 4 3 4

The SHAPE Function

The SHAPE function creates a new matrix by reshaping an argument matrix. It has the following general
form:
SHAPE (matrix, nrow < ,ncol < ,pad-value > >) ;

The ncol and pad-value arguments are optional; if they are not specified, default values are used. The
following statement uses the SHAPE function to create a 3 × 3 matrix that contains the values 99 and 33.
The function cycles back and repeats values to fill in the matrix when no pad-value is given.

aa = shape({99 33, 33 99}, 3, 3);

Figure 5.13 A Matrix of Repeated Values

aa 3 rows 3 cols (numeric)



Figure 5.13 continued

99 33 33
99 99 33
33 99 99

Alternatively, you can specify a value for pad-value that is used for filling in the matrix:

bb = shape({99 33, 33 99}, 3, 3, 0);

Figure 5.14 A Matrix Padded with Zeroes

bb 3 rows 3 cols (numeric)

99 33 33
99 0 0
0 0 0

The SHAPE function cycles through the argument matrix elements in row-major order and fills in the matrix
with zeros after the first cycle through the argument matrix.

Index Vectors

You can create a row vector by using the index operator (:). The following statements show that you can use
the index operator to count up, count down, or to create a vector of character values with numerical suffixes:

r = 1:5;
s = 10:6;
t = 'abc1':'abc5';

Figure 5.15 Row Vectors Created with the Index Operator

r 1 row 5 cols (numeric)

1 2 3 4 5

s 1 row 5 cols (numeric)

10 9 8 7 6

t 1 row 5 cols (character, size 4)

abc1 abc2 abc3 abc4 abc5



To create a vector based on an increment other than 1, use the DO function. For example, if you want a
vector that ranges from -1 to 1 by 0.5, use the following statement:

u = do(-1,1,.5);

Figure 5.16 Row Vector Created with the DO Function

u 1 row 5 cols (numeric)

-1 -0.5 0 0.5 1

Using Matrix Expressions

A matrix expression is a sequence of names, literals, operators, and functions that perform some calcula-
tion, evaluate some condition, or manipulate values. Matrix expressions can appear on either side of an
assignment statement.

Operators

Operators used in matrix expressions fall into three general categories:

Prefix operators are placed in front of operands. For example, -A uses the sign reversal prefix operator
(-) in front of the matrix A to reverse the sign of each element of A.

Binary operators are placed between operands. For example, A + B uses the addition binary operator
(+) between matrices A and B to add corresponding elements of the matrices.

Postfix operators are placed after an operand. For example, A` uses the transpose postfix operator (`)
after the matrix A to transpose the matrix.
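For example, the following statements apply one operator from each category to a small matrix:

a = {1 -2, -3 4};
b = -a;      /* prefix operator: sign reversal        */
c = a + b;   /* binary operator: elementwise addition */
d = a`;      /* postfix operator: transpose           */
print b c d;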

Matrix operators are described in detail in Chapter 23, "Language Reference."


Table 5.1 shows the precedence of matrix operators in the SAS/IML language.

Table 5.1 Operator Precedence

Priority Group    Operators
I (highest)       ` subscripts - (prefix) ## **
II                * # <> >< / @
III               + -
IV                || // :
V                 < <= > >= ^= =
VI                &
VII (lowest)      |

Compound Expressions

With SAS/IML software, you can write compound expressions that involve several matrix operators and
operands. For example, the following statements are valid matrix assignment statements:

a = x+y+z;
a = x+y*z`;
a = (-x)#(y-z);

The rules for evaluating compound expressions are as follows:

- Evaluation follows the order of operator precedence, as described in Table 5.1. Group I has the highest
priority; that is, Group I operators are evaluated first. Group II operators are evaluated after Group I
operators, and so forth. Consider the following statement:

a = x+y*z;

This statement first multiplies matrices y and z since the * operator (Group II) has higher precedence
than the + operator (Group III). It then adds the result of this multiplication to the matrix x and assigns
the new matrix to a.
- If neighboring operators in an expression have equal precedence, the expression is evaluated from left
to right, except for the Group I operators. Consider the following statement:

a = x/y/z;

This statement first divides each element of matrix x by the corresponding element of matrix y. Then,
using the result of this division, it divides each element of the resulting matrix by the corresponding
element of matrix z. The operators in Group I, described in Table 5.1, are evaluated from right to left.
For example, the following expression is evaluated as -(x**2):

-x**2

When multiple prefix or postfix operators are juxtaposed, precedence is determined by their order
from inside to outside.
For example, the following expression is evaluated as (a`)[i,j]:

a`[i,j]

- All expressions enclosed in parentheses are evaluated first, using the two preceding rules. Consider
the following statement:

a = x/(y/z);

This statement is evaluated by first dividing elements of y by the elements of z, then dividing this
result into x.

Elementwise Binary Operators

Elementwise binary operators produce a result matrix from element-by-element operations on two argument
matrices.
Table 5.2 lists the elementwise binary operators.

Table 5.2 Elementwise Binary Operators

Operator    Description
+           Addition; string concatenation
-           Subtraction
#           Elementwise multiplication
##          Elementwise power
/           Division
<>          Element maximum
><          Element minimum
|           Logical OR
&           Logical AND
<           Less than
<=          Less than or equal to
>           Greater than
>=          Greater than or equal to
^=          Not equal to
=           Equal to

For example, consider the following two matrices:

A = \begin{bmatrix} 2 & 2 \\ 3 & 4 \end{bmatrix}, \qquad
B = \begin{bmatrix} 4 & 5 \\ 1 & 0 \end{bmatrix}

The addition operator (+) adds corresponding matrix elements, as follows:

A + B is \begin{bmatrix} 6 & 7 \\ 4 & 4 \end{bmatrix}

The elementwise multiplication operator (#) multiplies corresponding elements, as follows:

A # B is \begin{bmatrix} 8 & 10 \\ 3 & 0 \end{bmatrix}

The elementwise power operator (##) raises elements to powers, as follows:

A ## 2 is \begin{bmatrix} 4 & 4 \\ 9 & 16 \end{bmatrix}

The element maximum operator (<>) compares corresponding elements and chooses the larger, as follows:

A <> B is \begin{bmatrix} 4 & 5 \\ 3 & 4 \end{bmatrix}

The less than or equal to operator (<=) returns a 1 if an element of A is less than or equal to the
corresponding element of B, and returns a 0 otherwise:

A <= B is \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
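You can verify these results with statements such as the following:

a = {2 2, 3 4};
b = {4 5, 1 0};
addAB  = a + b;    /* addition                   */
multAB = a # b;    /* elementwise multiplication */
powA   = a ## 2;   /* elementwise power          */
maxAB  = a <> b;   /* element maximum            */
leAB   = a <= b;   /* less than or equal to      */
print addAB multAB powA maxAB leAB;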

All operators can work on scalars, vectors, or matrices, provided that the operation makes sense. For
example, you can add a scalar to a matrix or divide a matrix by a scalar. As another example, the following
statement replaces each negative element of the matrix x with 0:

y = x#(x>0);

The expression x>0 is an operation that compares each element of x to (scalar) zero and creates a temporary
matrix of results; an element of the temporary matrix is 1 when the corresponding element of x is positive,
and 0 otherwise. The original matrix x is then multiplied elementwise by the temporary matrix, resulting in
the matrix y. To fully understand the intermediate calculations, you can use the RESET statement with the
PRINTALL option to have the temporary result matrices displayed.
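For example, the following statements display the temporary matrix that is created when the expression
x>0 is evaluated:

x = {-1 2, 3 -4};
reset printall;    /* display intermediate result matrices */
y = x # (x > 0);
reset noprintall;  /* restore the default behavior */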

Subscripts

Subscripts are special postfix operators placed in square brackets ([ ]) after a matrix operand. Subscript
operations have the general form operand[row,column] where

operand is usually a matrix name, but it can also be an expression or literal.


row refers to a scalar or vector expression that selects one or more rows from the operand.
column refers to a scalar or vector expression that selects one or more columns from the operand.

You can use subscripts to do any of the following:

 refer to a single element of a matrix

 refer to an entire row or column of a matrix

 refer to any submatrix contained within a matrix

 perform a reduction across rows or columns of a matrix. A reduction is a statistical operation (often a
sum or mean) applied to the rows or to the columns of a matrix.

In expressions, subscripts have the same (high) precedence as the transpose postfix operator (`). When both
row and column subscripts are used, they are separated by a comma. If a matrix has row or column names
associated with it from a MATTRIB or READ statement, then the corresponding row or column subscript
can also be a character matrix whose elements match the names of the rows or columns to be selected.

Selecting a Single Element

You can select a single element of a matrix in several ways. You can use two subscripts (row, column) to
refer to its location, or you can use one subscript to index the elements in row-major order.
For example, for the coffee example used previously in this chapter, there are several ways to find the
element that corresponds to the number of cups that Samuel drank on Monday.
First, you can refer to the element by row and column location. In this case, you want the fourth row and
first column. The following statements extract the datum and place it in the matrix c41:

coffee={4 2 2 3 2, 3 3 1 2 1, 2 1 0 2 1, 5 4 4 3 4};
names={Jenny, Linda, Jim, Samuel};
print coffee[rowname=names];
c41 = coffee[4,1];
print c41;

Figure 5.17 Datum Extracted from a Matrix

coffee

JENNY 4 2 2 3 2
LINDA 3 3 1 2 1
JIM 2 1 0 2 1
SAMUEL 5 4 4 3 4

c41

5

You can also use row and column names, which can be assigned with a MATTRIB statement as follows:

mattrib coffee rowname=names
        colname={'MON' 'TUE' 'WED' 'THU' 'FRI'};
cSamMon = coffee['SAMUEL','MON'];
print cSamMon;

Figure 5.18 Datum Extracted from a Matrix with Assigned Attributes

cSamMon

5

You can also look for the element by enumerating the elements of the matrix in row-major order. In this
case, you refer to this element as the sixteenth element of coffee:

c16 = coffee[16];
print c16;

Figure 5.19 Datum Extracted from a Matrix by Specifying the Element Number

c16

5

Selecting a Row or Column

To refer to an entire row of a matrix, specify the subscript for the row but omit the subscript for the column.
For example, to refer to the row of the coffee matrix that corresponds to Jim, you can specify the submatrix
that consists of the third row and all columns. The following statements extract and print this submatrix:

jim = coffee[3,];
print jim;

Alternately, you can use the row names assigned by the MATTRIB statement. Both results are shown in
Figure 5.20.

jim2 = coffee['JIM',];
print jim2;

Figure 5.20 Row Extracted from a Matrix

jim

2 1 0 2 1

jim2

2 1 0 2 1

If you want to extract the data for Friday, you can specify the subscript for the fifth column. You omit the
row subscript to indicate that the operation applies to all rows. The following statements extract and print
this submatrix:

friday = coffee[,5];
print friday;

Figure 5.21 Column Extracted from a Matrix

friday

2
1
1
4

Alternatively, you could also index by the column name as follows:

friday = coffee[,'FRI'];

Submatrices

You refer to a submatrix by specifying the rows and columns that determine the submatrix. For example,
to create the submatrix of coffee that consists of the first and third rows and the second, third, and fifth
columns, use the following statements:

submat1 = coffee[{1 3}, {2 3 5}];


print submat1;

Figure 5.22 Submatrix Extracted from a Matrix

submat1

2 2 2
1 0 1

The first vector, {1 3}, selects the rows and the second vector, {2 3 5}, selects the columns. Alternately,
you can create the vectors of indices and use them to extract the submatrix, as shown in the following
statements:

rows = {1 3};
cols = {2 3 5};
submat1 = coffee[rows,cols];

You can also use the row and column names:

rows = {'JENNY' 'JIM'};


cols = {'TUE' 'WED' 'FRI'};
submat1 = coffee[rows, cols];

You can use index vectors generated by the index creation operator (:) in subscripts to refer to successive
rows or columns. For example, the following statements extract the first three rows and last three columns
of coffee:

submat2 = coffee[1:3, 3:5];


print submat2;

Figure 5.23 Submatrix of Contiguous Rows and Columns

submat2

2 3 2
1 2 1
0 2 1

Selecting Multiple Elements

All SAS/IML matrices are stored in row-major order. This means that you can index multiple elements of a
matrix by listing the position of the elements in an n × p matrix. The elements in the first row have positions
1 through p, the elements in the second row have positions p + 1 through 2p, and the elements in the last
row have positions (n-1)p + 1 through np.
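To illustrate, the element in the second row and third column of the 4 × 5 coffee matrix has position
(2-1)*5 + 3 = 8, so the following statements extract the same value in two ways:

c1 = coffee[2,3];  /* row and column subscripts         */
c2 = coffee[8];    /* single subscript, row-major order */
print c1 c2;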
For example, in the coffee data discussed previously, you might be interested in finding occurrences for
which some person (on some day) drank more than two cups of coffee. The LOC function is useful for
creating an index vector for a matrix that satisfies some condition. The following statement uses the LOC
function to find the data that satisfy the desired criterion:

h = loc(coffee > 2);


print h;

Figure 5.24 Indices That Correspond to a Criterion

h
COL1 COL2 COL3 COL4 COL5

ROW1 1 4 6 7 16

h
COL6 COL7 COL8 COL9

ROW1 17 18 19 20

The row vector h contains indices of the coffee matrix that satisfy the criterion. If you want to find the
number of cups of coffee consumed on these occasions, you need to subscript the coffee matrix with the
indices, as shown in the following statements:

cups = coffee[h];
print cups;

Figure 5.25 Values That Correspond to a Criterion

cups

4
3
3
3
5
4
4
3
4

Notice that SAS/IML software returns a column vector when a matrix is subscripted by a single array of
indices. This might surprise you, but clearly the cups matrix cannot be the same shape as the coffee matrix
since it contains a different number of elements. Therefore, the only reasonable alternative is to return either
a row vector or a column vector. Either would be a valid choice; SAS/IML software returns a column vector.
Even if the original matrix is a row vector, the subscripted matrix will be a column vector, as the following
example shows:

v = {-1 2 5 -2 7}; /* v is a row vector */


v2 = v[{1 3 5}]; /* v2 is a column vector */
print v2;

Figure 5.26 Column Vector of Extracted Values

v2

-1
5
7

If you want to index into a row vector and you want the resulting variable also to be a row vector, then use
the following technique:

v3 = v[ ,{1 3 5}]; /* Select columns. Note the comma. */


print v3;

Figure 5.27 Row Vector of Extracted Values

v3

-1 5 7

Subscripted Assignment

You can assign values into a matrix by using subscripts to refer to the element or submatrix. In this type of
assignment, the subscripts appear on the left side of the equal sign. For example, to assign the value 4 in
the first row, second column of coffee, use subscripts to refer to the appropriate element in an assignment
statement, as shown in the following statements and in Figure 5.28:

coffee[1,2] = 4;
print coffee;

To change the values in the last column of coffee to zeros, use the following statements:

coffee[,5] = {0,0,0,0}; /* alternatively: coffee[,5] = 0; */


print coffee;

Figure 5.28 Matrices after Assigning Values to Elements

coffee

4 4 2 3 2
3 3 1 2 1
2 1 0 2 1
5 4 4 3 4

Figure 5.28 continued

coffee

4 4 2 3 0
3 3 1 2 0
2 1 0 2 0
5 4 4 3 0

In the next example, you locate the negative elements of a matrix and set these elements to zero. (This
can be useful in situations where negative elements might indicate errors.) The LOC function is useful for
creating an index vector for a matrix that satisfies some criterion. The following statements use the LOC
function to find and replace the negative elements of the matrix T:

t = {3 2 -1,
6 -4 3,
2 2 2 };
i = loc(t<0);
print i;
t[i] = 0;
print t;

Figure 5.29 Results of Finding and Replacing Negative Values

i

3 5

t

3 2 0
6 0 3
2 2 2

Subscripts can also contain expressions. For example, the previous example could have been written as
follows:

t[loc(t<0)] = 0;

If you use a noninteger value as a subscript, only the integer portion is used. Using a subscript value less
than one or greater than the dimension of the matrix results in an error.
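For example, both of the following statements extract the first element of t:

e1 = t[1];
e2 = t[1.8];   /* the fractional part is ignored; same as t[1] */
print e1 e2;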

Subscript Reduction Operators

A reduction operator is a statistical operation (for example, a sum or a mean) that returns a matrix of a
smaller dimension. Reduction operators are often encountered in frequency tables: the marginal frequencies
represent the sum of the frequencies across rows or down columns.

In SAS/IML software, you can use reduction operators in place of values for subscripts to get reductions
across all rows or columns. Table 5.3 lists operators for subscript reduction.

Table 5.3 Subscript Reduction Operators

Operator    Description
+           Addition
#           Multiplication
<>          Maximum
><          Minimum
<:>         Index of maximum
>:<         Index of minimum
:           Mean
##          Sum of squares

For example, to get row sums of a matrix X, you can sum across the columns with the syntax X[,+].
Omitting the first subscript specifies that the operation applies to all rows. The second subscript (+) specifies
that summation reduction take place across the columns. The elements in each row are added, and the new
matrix consists of one column that contains the row sums.
To give a specific example, consider the coffee data from earlier in the chapter. The following statements
use the summation reduction operator to compute the sums for each row:

coffee={4 2 2 3 2, 3 3 1 2 1, 2 1 0 2 1, 5 4 4 3 4};
names={Jenny, Linda, Jim, Samuel};
mattrib coffee rowname=names colname={'MON' 'TUE' 'WED' 'THU' 'FRI'};
Total = coffee[,+];
print coffee Total;

Figure 5.30 Summation across Columns to Find the Row Sums

coffee MON TUE WED THU FRI Total

JENNY 4 2 2 3 2 13
LINDA 3 3 1 2 1 10
JIM 2 1 0 2 1 6
SAMUEL 5 4 4 3 4 20

You can use these reduction operators to reduce the dimensions of rows, columns, or both. When both rows
and columns are reduced, row reduction is done first.
For example, the expression A[+,<>] results in the maximum (<>) of the column sums (+).
You can repeat reduction operators. To get the sum of the row maxima, use the expression
A[,<>][+,] or, equivalently, A[,<>][+].
A subscript such as A[{2 3},+] first selects the second and third rows of A and then finds the row sums of
that submatrix.

The following examples demonstrate how to use the operators for subscript reduction. Consider the
following matrix:

A = \begin{bmatrix} 0 & 1 & 2 \\ 5 & 4 & 3 \\ 7 & 6 & 8 \end{bmatrix}

The following statements are true:

- A[{2 3},+] is \begin{bmatrix} 12 \\ 21 \end{bmatrix} (row sums for rows 2 and 3)

- A[+,<>] is 13 (maximum of column sums)

- A[<>,+] is 21 (sum of column maxima)

- A[,><][+,] is 9 (sum of row minima)

- A[,<:>] is \begin{bmatrix} 3 \\ 1 \\ 3 \end{bmatrix} (indices of row maxima)

- A[>:<,] is \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} (indices of column minima)

- A[:] is 4 (mean of all elements)
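You can verify these reductions with statements such as the following:

a = {0 1 2, 5 4 3, 7 6 8};
rsum   = a[{2 3}, +];   /* row sums for rows 2 and 3 */
maxcs  = a[+, <>];      /* maximum of column sums    */
summax = a[<>, +];      /* sum of column maxima      */
summin = a[, ><][+, ];  /* sum of row minima         */
idxmax = a[, <:>];      /* indices of row maxima     */
idxmin = a[>:<, ];      /* indices of column minima  */
avg    = a[:];          /* mean of all elements      */
print rsum maxcs summax summin idxmax idxmin avg;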

Displaying Matrices with Row and Column Headings

You can customize the way matrices are displayed with the AUTONAME option, with the ROWNAME=
and COLNAME= options, or with the MATTRIB statement.

The AUTONAME Option in the RESET Statement

You can use the RESET statement with the AUTONAME option to automatically display row and column
headings. If your matrix has n rows and p columns, the row headings are ROW1 to ROWn and the column
headings are COL1 to COLp. For example, the following statements produce the subsequent matrix:

coffee={4 2 2 3 2, 3 3 1 2 1, 2 1 0 2 1, 5 4 4 3 4};
reset autoname;
print coffee;

Figure 5.31 Result of the AUTONAME Option

coffee
COL1 COL2 COL3 COL4 COL5

ROW1 4 2 2 3 2
ROW2 3 3 1 2 1
ROW3 2 1 0 2 1
ROW4 5 4 4 3 4

The ROWNAME= and COLNAME= Options in the PRINT Statement

You can specify your own row and column headings. The easiest way is to create vectors that contain the
headings and then display the matrix by using the ROWNAME= and COLNAME= options in the PRINT
statement. For example, the following statements display row names and column names for a matrix:

names={Jenny, Linda, Jim, Samuel};


days={Mon Tue Wed Thu Fri};
mattrib coffee rowname=names colname=days;
print coffee;

Figure 5.32 Result of the ROWNAME= and COLNAME= Options

coffee
MON TUE WED THU FRI

JENNY 4 2 2 3 2
LINDA 3 3 1 2 1
JIM 2 1 0 2 1
SAMUEL 5 4 4 3 4

The MATTRIB Statement

The MATTRIB statement associates printing characteristics with matrices. You can use the MATTRIB
statement to display coffee with row and column headings. In addition, you can format the displayed
numeric output and assign a label to the matrix name. The following example shows how to customize your
displayed output:

mattrib coffee rowname=names
        colname=days
        label='Weekly Coffee'
        format=2.0;
print coffee;

Figure 5.33 Result of the MATTRIB Statement

Weekly Coffee
MON TUE WED THU FRI

JENNY 4 2 2 3 2
LINDA 3 3 1 2 1
JIM 2 1 0 2 1
SAMUEL 5 4 4 3 4

More about Missing Values

Missing values in matrices are discussed in Chapter 3, "Understanding the SAS/IML Language." You should
carefully read that chapter and Chapter 22, "Further Notes," so that you are aware of the way SAS/IML soft-
ware handles missing values. The following examples show how missing values are handled for elementwise
operations and for subscript reduction operators.
Consider the following two matrices X and Y:

X = \begin{bmatrix} 1 & 2 & . \\ . & 5 & 6 \\ 7 & . & 9 \end{bmatrix}, \qquad
Y = \begin{bmatrix} 4 & . & 2 \\ 2 & 1 & 3 \\ 6 & . & 5 \end{bmatrix}

The following operations handle missing values in matrices:

Matrix addition: X + Y is \begin{bmatrix} 5 & . & . \\ . & 6 & 9 \\ 13 & . & 14 \end{bmatrix}

Elementwise multiplication: X # Y is \begin{bmatrix} 4 & . & . \\ . & 5 & 18 \\ 42 & . & 45 \end{bmatrix}

Subscript reduction: X[+,] is \begin{bmatrix} 8 & 7 & 15 \end{bmatrix}
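The following statements reproduce these results:

x = {1 2 ., . 5 6, 7 . 9};
y = {4 . 2, 2 1 3, 6 . 5};
s = x + y;        /* missing values propagate in elementwise arithmetic */
p = x # y;
colsum = x[+, ];  /* reduction operators skip missing values */
print s p colsum;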
Chapter 6

Programming Statements

Contents

Overview of Programming Statements  65
Selection Statements  66
Compound Statements  67
Iteration Statements  68
Jump Statements  70
Statements That Define and Execute Modules  72
   Defining and Executing a Module  72
   Understanding Symbol Tables  73
   Modules with No Arguments  74
   Modules with Arguments  74
   Nesting Modules  78
   Understanding Argument Passing  80
   Storing and Loading Modules  82
Termination Statements  82
   PAUSE Statement  82
   STOP Statement  84
   ABORT Statement  84
   QUIT Statement  84

Overview of Programming Statements

The SAS/IML programming language has control statements that enable you to control the path of execution
in a program. The control statements in the SAS/IML language are similar to the corresponding statements
in the SAS DATA step. This chapter describes the following concepts:

- selection statements

- compound statements

- iteration statements

- jump statements

- statements that define and execute modules

- termination statements

Selection Statements

Selection statements choose one of several control paths in a program. The SAS/IML language supports
the IF-THEN and the IF-THEN/ELSE statements. You can use an IF-THEN statement to test an expression
and to conditionally perform an operation. You can also optionally specify an ELSE statement. The general
form of the IF-THEN/ELSE statement is as follows:
IF expression THEN statement1 ;

ELSE statement2 ;

The expression is evaluated first. If the value of expression is true (which means nonzero and nonmissing),
the THEN statement is executed. If the value of expression is false (which means zero or missing), the
ELSE statement (if present) is executed. If an ELSE statement is not present, control passes to the next
statement in the program.
The expression in an IF-THEN statement is often a comparison, as in the following example:

a = {17 22, 13 10};


if max(a)<20 then
p = 1;
else p = 0;

The IF clause evaluates the expression max(a)<20. If all values of the matrix a are less than 20, p is set to
1. Otherwise, p is set to 0. For the given values of a, the IF condition is false, since a[1,2] is not less than
20.
You can nest IF-THEN statements within the clauses of other IF-THEN or ELSE statements. Any number
of nesting levels is allowed. The following is an example of nested IF-THEN statements:

w = 0;
if n>0 then
if x>y then w = x;
else w = y;

There is an ambiguity associated with the previous statements. Is the ELSE statement associated with the
first IF-THEN statement or the second? As the indenting indicates, an ELSE statement is associated with
the closest previous IF-THEN statement. (If you want the ELSE statement to be associated with the first
IF-THEN statement, you need to use a DO group, as described in the next section.)
When the condition to be evaluated is a matrix expression, the result of the evaluation is a temporary matrix
of zeros, ones, and possibly missing values. If all values of the result matrix are nonzero and nonmissing,
the condition is true; if any element in the result matrix is zero or missing, the condition is false. This
evaluation is equivalent to using the ALL function. For example, the following two statements produce the
same result:

if x<y then statement;


if all(x<y) then statement;
If you are testing whether at least one element in x is less than the corresponding element in y, use the ANY
function, as shown in the following statement:
if any(x<y) then statement;

Compound Statements

Several statements can be grouped together into a compound statement (also called a block or a DO group).
You use a DO statement to define the beginning of a DO group and an END statement to define the end. DO
groups have two principal uses:

- to group a set of statements so that they are executed as a unit

- to group a set of statements for a conditional (IF-THEN/ELSE) clause

DO groups have the following general form:


DO ;

statements ;

END ;

As with IF-THEN/ELSE statements, you can nest DO groups to any number of levels. The following is an
example of nested DO groups:

do;
statements;
do;
statements;
end;
statements;
end;

It is a good programming convention to indent the statements in DO groups as shown, so that each
statement's position indicates its level of nesting.
For IF-THEN/ELSE conditionals, DO groups can be used as units for either the THEN or ELSE clauses
so that you can perform many statements as part of the conditional action, as shown in the following state-
ments:

if x<y then
do;
z1 = abs(x+y);
z2 = abs(x-y);
end;
else
do;
z1 = abs(x-y);
z2 = abs(x+y);
end;

An alternative formulation that requires less indented space is to write the DO statement on the same line as
the THEN or ELSE clause, as shown in the following statements:

if x<y then do;


z1 = abs(x+y);
z2 = abs(x-y);
end;
else do;
z1 = abs(x-y);
z2 = abs(x+y);
end;

For some programming applications, you must use either a DO group or a module. For example, LINK and
GOTO statements must be programmed inside a DO group or a module.

Iteration Statements

The DO statement supports clauses that iterate over compound statements. With an iterative DO statement,
you can repeatedly execute a set of statements until some condition stops the execution. The following table
lists the different kinds of iteration statements in the SAS/IML language:

Clause                                        DO Statement
DATA                                          DO DATA statement
variable = start TO stop < BY increment >     Iterative DO statement
WHILE(expression)                             DO WHILE statement
UNTIL(expression)                             DO UNTIL statement

A DO statement can have any combination of these four iteration clauses, but the clauses must be specified
in the order listed in the preceding table.
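For example, the following statements combine an iterative clause with a WHILE clause. The loop exits as
soon as either the index passes its stopping value or the WHILE condition fails:

total = 0;
do i = 1 to 100 while(total < 20);  /* stops early when total reaches 20 */
   total = total + i;
end;
print i total;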

DO DATA Statements

The general form of the DO DATA statement is as follows:


DO DATA ;

The DATA keyword specifies that iteration stops when an end-of-file condition occurs. Other DO specifica-
tions exit after tests are performed at the top or bottom of a loop.

See Chapter 7, "Working with SAS Data Sets," and Chapter 8, "File Access," for more information about
processing data.
You can use the DO DATA statement to read data from an external file or to process observations from a
SAS data set. In the DATA step in Base SAS software, the iteration is usually implied. The DO DATA
statement simulates this iteration until the end of a file is reached.
The following example reads data from an external file named MyData.txt that contains the following data:

Rick 2.40 3.30


Robert 3.90 4.00
Simon 3.85 4.25

The data values are read one at a time into the scalar variables Name, x, and y.

filename MyFile 'MyData.txt';


infile MyFile; /* infile statement */
do data; /* begin read loop */
input Name $6. x y; /* read a data value */
/* do something with each value */
end;
closefile MyFile;

Iterative DO Statements

The general form of the iterative DO statement is as follows:


DO variable=start TO stop < BY increment > ;

The value of the variable matrix is initialized to the value of the start matrix. This value is then incremented
by the increment value (or by 1 if increment is not specified) until it is greater than the stop value.
(If increment is negative, then the iterations stop when the value is less than the stop value.)
For example, the following statement specifies a DO loop that initializes i to the value 1 and increments i
by 2 after each loop. The loop ends when the value of i is greater than 10.

y = 0;
do i = 1 to 10 by 2;
y = y + i;
end;
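Similarly, a negative increment counts down. The following loop iterates over the values 10, 8, 6, 4, and 2:

y = 0;
do i = 10 to 1 by -2;   /* i takes the values 10, 8, 6, 4, 2 */
   y = y + i;
end;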

DO WHILE Statements

The general form of the DO WHILE statement is as follows:


DO WHILE expression ;

With a WHILE clause, the expression is evaluated at the beginning of each loop, with iterations continuing
until the expression is false (that is, until the expression contains a zero or a missing value). Note that if the
expression is false the first time it is evaluated, the loop is not executed.

For example, the following statements initialize count to 1 and then increment count four times:

count = 1;
do while(count<5);
count = count+1;
end;

DO UNTIL Statements

The general form of the DO UNTIL statement is as follows:


DO UNTIL expression ;

The UNTIL clause is like the WHILE clause except that the expression is evaluated at the bottom of the
loop. This means that the loop always executes at least once.
For example, the following statements initialize count to 1 and then increment count five times:

count = 1;
do until(count>5);
count = count+1;
end;

Jump Statements

During normal execution, each statement in a program is executed in sequence, one after another. The
GOTO and LINK statements cause a SAS/IML program to jump from one statement in a program to another
statement without executing intervening statements. The place to which execution jumps is identified by a
label, which is a name followed by a colon placed before an executable statement. You can program a jump
by using either the GOTO statement or the LINK statement:
GOTO label ;

LINK label ;

Both the GOTO and LINK statements instruct SAS/IML software to jump immediately to a labeled state-
ment. However, if you use a LINK statement, then the program returns to the statement following the LINK
statement when the program executes a RETURN statement. The GOTO statement does not have this fea-
ture. Thus, the LINK statement provides a way of calling sections of code as if they were subroutines.
The statements that define the subroutine begin with the label and end with a RETURN statement. LINK
statements can be nested within other LINK statements; any number of nesting levels is allowed.
NOTE: The GOTO and LINK statements must be inside a module or DO group. These statements must be
able to resolve the referenced label within the current unit of statements. Although matrix symbols can be
shared across modules, statement labels cannot. Therefore, all GOTO statement labels and LINK statement
labels must be local to the DO group or module.

The GOTO and LINK statements are not often used because you can usually write more understandable
programs by using DO groups and modules. The following statements shows an example of using the
GOTO statement, followed by an equivalent set of statements that do not use the GOTO statement:

x = -2;
do;
if x<0 then goto negative;
y = sqrt(x);
print y;
goto TheEnd;
negative:
print "Sorry, value is negative";
TheEnd:
end;

/* same logic, but without using a GOTO statement */


if x<0 then print "Sorry, value is negative";
else do;
y = sqrt(x);
print y;
end;

The output of each section of the program is identical. It is shown in Figure 6.1.

Figure 6.1 Output That Demonstrates the GOTO Statement

Sorry, value is negative

Sorry, value is negative

The following statements show an example of using the LINK statement. You can also rewrite the
statements in a way that avoids using the LINK statement.

x = -2;
do;
if x<0 then link negative;
y = sqrt(x);
print y;
goto TheEnd;
negative:
print "Using absolute value of x";
x = abs(x);
return;
TheEnd:
end;

The output of the program is shown in Figure 6.2.



Figure 6.2 Output That Demonstrates the LINK Statement

Using absolute value of x

1.4142136

Statements That Define and Execute Modules

Modules are used for two purposes:

- to create a user-defined subroutine or function. That is, you can define a group of statements that can
  be called from anywhere in the program.

- to define variables that are local to the module. That is, you can create a separate environment with
  its own symbol table (see "Understanding Symbol Tables" on page 73).

A module always begins with a START statement and ends with a FINISH statement. A module is either
a function or a subroutine. When a module returns a single parameter, it is called a function. A function is
invoked by its name in an assignment statement. Otherwise, a module is a subroutine. You can execute a
subroutine by using either the RUN statement or the CALL statement.

Defining and Executing a Module

A module definition begins with a START statement, which has the following general form:
START < name > < ( arguments ) > < GLOBAL( arguments ) > ;

A module definition ends with a FINISH statement, which has the following general form:
FINISH < name > ;

If no name appears in the START statement, the name of the module defaults to MAIN. If no name appears
on the FINISH statement, the name of the most recently defined module is used.
There are two ways you can execute a module: you can use either a RUN statement or a CALL statement.
The general forms of these statements are as follows:
RUN name < ( arguments ) > ;

CALL name < ( arguments ) > ;

The only difference between the RUN and CALL statements is the order of resolution. The RUN statement
is resolved in the following order:

1. user-defined module

2. SAS/IML built-in subroutine

In contrast, the CALL statement is resolved in the opposite order:

1. SAS/IML built-in subroutine

2. user-defined module

In other words, if you define a module with the same name as a SAS/IML subroutine, you can use the RUN
statement to call the user-defined module and the CALL statement to call the built-in subroutine.
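For example, the following statements define a module named Sort, which shares its name with the built-in
SORT subroutine (which sorts a matrix in place):

start Sort(x);   /* user-defined module with the name of a built-in */
   print "Running the user-defined Sort module";
finish;

z = {3, 1, 2};
run Sort(z);     /* resolves to the user-defined module */
call sort(z, 1); /* resolves to the built-in subroutine */
print z;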
The RUN and CALL statements must have arguments that correspond to the ones defined in the START
statement. A module can call other modules provided that it never recursively calls itself.
After the last statement in a module is executed, control returns to the statement that initially called the
module. You can also force a return from a module by using the RETURN statement.

Understanding Symbol Tables

The scope of a variable is the set of locations in a program where a variable can be referenced. A variable
defined outside of any module is said to exist at the program's main scope. For a variable defined inside a
module, the scope of the variable is the body of the module.
A symbol is the name of a SAS/IML matrix. For example, if x and y are matrices, then the names x and
y are the symbols. Whenever a matrix is defined, its symbol is stored in a symbol table. There are two
kinds of symbol tables. When a matrix is defined at the main scope, its name is stored in the global symbol
table. In contrast, each module with arguments is given its own local symbol table that contains all symbols
used inside the module.
There can be many local symbol tables, one for each module with arguments. (Modules without arguments
are described in the next section.) You can have a symbol x in the global table and the same symbol in a
local table, but these correspond to separate matrices. By default, the value of the matrix at global scope is
independent from the value of a local matrix of the same name that is defined inside a module. Similarly,
you can have two modules that each use the matrix x, and these matrices are not related. You can force a
module to use a variable at main scope by using a GLOBAL clause as described in "Using the GLOBAL
Clause" on page 77.
Values of symbols in a local table are temporary; that is, they exist only while the module is executing. You
can no longer access the value of a local variable after the module exits.

Modules with No Arguments

The previous section emphasized that modules with arguments are given a local symbol table. In contrast,
a module that has no arguments shares the global symbol table. All variables in such a module are global,
which implies that if you modify the value of a matrix inside the module, that change persists when the
module exits.
The following example shows a module with no arguments:

/* module without arguments, all symbols are global. */


proc iml;
a = 10; /* a is global */
b = 20; /* b is global */
c = 30; /* c is global */
start Mod1; /* begin module */
p = a+b; /* p is global */
c = 40; /* c already global */
finish; /* end module */

run Mod1; /* run the module */


print a b c p;

Figure 6.3 Output from Module with Global Variables

a b c p

10 20 40 30

Notice that after the module exits, the following conditions exist:

 a is still 10.

 b is still 20.

 c has been changed to 40.

 p is created, added to the global symbol table, and set to 30.

Modules with Arguments

Most modules contain one or more arguments, and therefore contain a separate local symbol table. The
following statements apply to modules with arguments:

 You can specify arguments as variable names, expressions, or literal values.

 If you specify several arguments, use commas to separate them.



 Arguments are passed by reference, not by value. This means that a module can change the value of
an argument. An argument that is modified by a module is called an output argument.

 If you have both output arguments and input arguments, the SAS/IML convention is to list the output
arguments first.

 When a module is run, the input arguments can be a matrix name, expression, or literal. However,
you should specify only matrix names for output arguments.

When a module is run, the value for each argument is transferred from the global symbol table to the local
symbol table. For example, consider the module Mod2 defined in the following statements:

proc iml;
a = 10;
b = 20;
c = 30;
start Mod2(x,y); /* begin module */
p = x+y;
x = 100; /* change the value of an argument */
c = 25;
finish Mod2; /* end module */

run Mod2(a,b);
print a b c;

The first three statements are submitted in the main scope and define the variables a, b, and c. The values of
these variables are stored in the global symbol table. The START statement begins the definition of Mod2
and lists two variables (x and y) as arguments. This creates a local symbol table for Mod2. All symbols
used inside the module (x, y, p, and c) are in the local symbol table. There is also a correspondence between
the arguments in the RUN statement (a and b) and the arguments in the START statement (x and y). Also
note that a and b exist only in the global symbol table, whereas x, y, and p exist only in the local symbol
table. The symbol c exists in both symbol tables, but the values are completely independent.
When Mod2 is executed with the RUN statement, the local variable x becomes the owner of the data in
the global matrix a. Similarly, the local variable y becomes the owner of the data in b. Because c is not an
argument, there is no correspondence between the value of c in the global table and the value of c in the
local table. When the module finishes execution, the variables a and b at main scope regain ownership of
the data in x and y, respectively. The local symbol table that contains x and y is deleted. If the data were
modified within the module, the values of a and b reflect the change, as shown in Figure 6.4.

Figure 6.4 Output from Module with Arguments

a b c

100 20 30

Notice that after the module is executed, the following are true:

 a is changed to 100 since the corresponding argument, x, was changed to 100 inside the module.

 b is still 20.

 c is still 30. Inside the module, the local symbol c was set to 25, but there is no correspondence
between the global symbol c and the local symbol c.

Also note that, inside the module, the symbols a and b do not exist. Outside the module, the symbols p, x,
and y do not exist.

Defining Function Modules

Functions are special modules that return a single value. To write a function module, include a RETURN
statement that specifies the value to return. The RETURN statement is necessary for a module to be a
function. You can use a function module in an assignment statement, as you would a built-in function.
The symbol-table logic described in the preceding section also applies to function modules. In the following
function module, the value of c in the local symbol table is assigned to the symbol z at main scope:

proc iml;
a = 10;
b = 20;
c = 30;
start Mod3(x,y);
c = 2#x + y;
return (c); /* return function value */
finish Mod3;

z = Mod3(a,b); /* call function */


print a b c z;

Figure 6.5 Output from a Function Module

a b c z

10 20 30 40

Notice the following about this example:

 a is still 10 and b is still 20.

 c is still 30. The symbol c in the global table has no connection with the symbol c in the local table.

 z is assigned the value 40, which is the value returned by the module.

Again notice that, inside the module, the symbols a, b, and z do not exist. Outside the module, the symbols
x and y do not exist.

In the next example, you define your own function module, Add, which adds its two arguments:

proc iml;
start Add(x,y);
sum = x+y;
return(sum);
finish;

a = {9 2, 5 7};
b = {1 6, 8 10};
c = Add(a,b);
print c;

Figure 6.6 Output from a Function Module

10 8
13 17

Function modules can also be called as arguments to other modules or to built-in functions. For example,
in the following statements, the Add module is called twice, and the results from those calls are used as
arguments to call the Add module a third time:

d = Add(Add({1 2},{3 4}), Add({5 6}, {7 8}));


print d;

Figure 6.7 Output from a Nested Module Call

16 20

Functions are resolved in the following order:

1. SAS/IML built-in functions

2. user-defined function modules

3. SAS DATA step functions

Because of this order of resolution, it is an error to try to define a function module that has the same name
as a SAS/IML built-in function.

Using the GLOBAL Clause

For modules with arguments, the variables used inside the module are local and have no connection with
any variables that exist outside the module in the global table. However, it is possible to specify that certain
variables not be placed in the local symbol table but rather be accessed from the global table. The GLOBAL
clause specifies variables that you want to share between local and global symbol tables. The following
is an example of a module that uses a GLOBAL clause to define the symbol c as global. This defines a
one-to-one correspondence between the value of c in the global symbol table and the value of c in the local
symbol table.

proc iml;
a = 10;
b = 20;
c = 30;
start Mod4(x,y) global (c);
x = 100;
c = 40;
b = 500;
finish Mod4;

run Mod4(a,b);
print a b c;

The output is shown in Figure 6.8.

Figure 6.8 Output from a Module with a GLOBAL Clause

a b c

100 20 40

After the module is called, the following facts are true:

 a is changed to 100.

 b is still 20 and not 500, since b exists independently in the global and local symbol tables.

 c is changed to 40 because it was declared to be a global variable. The matrix c inside the module is
the same matrix as the one outside the module.

Because every module with arguments has its own local table, it is possible to have many local tables. You
can use the GLOBAL clause with many (or all) modules to share a single global variable among many local
symbol tables.
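
For instance, in the following sketch (the module names StepA and StepB are hypothetical), two modules share a single counter through their GLOBAL clauses:

proc iml;
count = 0; /* one global counter */
start StepA(x) global (count);
count = count + 1; /* updates the global matrix count */
finish;
start StepB(x) global (count);
count = count + 10; /* updates the same global matrix */
finish;
a = 0;
run StepA(a);
run StepB(a);
print count; /* prints 11 */
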

Nesting Modules

You can nest one module within another. You must make sure that each nested module is completely
contained inside the parent module. Each module is defined independently of the others. When you nest
modules, it is a good idea to indent the statements relative to the nesting level, as shown in the following
example:

start ModA;
start ModB;
x = 1;
finish ModB;
run ModB;
finish ModA;

run ModA;

In this example, SAS/IML software starts parsing statements for a module called ModA. In the middle of
this module, it recognizes the start of a new module called ModB. It parses ModB until it encounters the
first FINISH statement. It then finishes parsing ModA. Thus, it behaves the same as if ModB were parsed
prior to ModA, as follows:

start ModB;
x = 1;
finish ModB;

start ModA;
run ModB;
finish ModA;

run ModA;

In particular, you can call the ModB module from the program's main scope. It is not the case that ModB is
a local module known only to module ModA. There is no such thing as a local module.

Calling a Module from Another Module

Consider the following example of calling one module from another module:

proc iml;
start Mod5(a,b);
c = a+b;
d = a-b;
run Mod6(c,d);
print "In Mod5:" c d;
finish;

start Mod6(x,y);
x = x#y;
finish;

run Mod5({1 2}, {3 4});

When one module calls another, you can pass in any symbol defined in the scope of the calling module. In
the previous example, the Mod5 module calls the Mod6 module and passes in the local variables c and d.
The Mod6 module multiplies its arguments and overwrites the first argument, as shown in Figure 6.9.

Figure 6.9 Output from Nested Modules

c d

In Mod5: -8 -12 -2 -2

The variables in the local symbol table of Mod5 are available to pass into Mod6. If Mod6 changes the values
of an argument, those values are also changed in the environment from which Mod6 was called. For the
previous example, this means that the local variable c is modified by Mod6.
If a module has no arguments, it can access variables in the environment from which it is called. For
example, consider the following modules:

x = 123;

start Mod7;
print "In Mod7:" x;
finish;

start Mod8(p);
print "In Mod8:" p;
run Mod7;
finish;

run Mod8(x);

In this example, module Mod7 is called from module Mod8. Therefore, the variables available to Mod7 are
those defined in the scope of Mod8. There is no variable named x in the environment of Mod8. Therefore
an error occurs on the PRINT statement in Mod7, as shown in Figure 6.10. An error would not occur if you
call Mod7 from the main scope, because x is defined at main scope.

Figure 6.10 Error Message When a Variable Is Not Defined in a Module

NOTE: IML Ready


NOTE: Module MOD7 defined.
NOTE: Module MOD8 defined.
ERROR: Matrix x has not been set to a value.

statement : PRINT at line 1550 column 4


traceback : module MOD7 at line 1550 column 4
module MOD8 at line 1555 column 4

NOTE: Paused in module MOD7.

Understanding Argument Passing

You can use expressions and subscripted matrices as arguments to a module, but it is important to understand
the way the SAS/IML software passes the results to the module. Expressions are evaluated, and the evaluated
values are stored in temporary variables. Similarly, submatrices are created from subscripted variables and
stored in temporary variables. The temporary variables are passed to the module. In the following example,
notice that the matrix x does not change; you might expect x to contain the squared values of y.

start Square(a,b);
a = b##2;
finish;

x = {. .}; /* initialize with missing values */


y = {3 4};
reset printall; /* print all intermediate results */
do i = 1 to 2; /* pass elements of matrix to modules */
run Square(x[i],y[i]); /* WRONG: x[i] is not changed */
end;
print x; /* show that x is unchanged */

The output is shown in Figure 6.11. The names of the temporary matrices created by the subscript operators
are _TEM1001 and _TEM1002. These are the matrices passed into the Square module. The module assigns
the value 9 to the local matrix a, and this value is returned to main scope in the temporary matrix _TEM1001,
which promptly vanishes! The same sequence of operations repeats for the next call to the Square module.

Figure 6.11 Temporary and Local Matrices in a Module

i 1 row 1 col (numeric)

1

_TEM1002 1 row 1 col (numeric)

3

_TEM1001 1 row 1 col (numeric)

.

a 1 row 1 col (numeric)

9

_TEM1002 1 row 1 col (numeric)

4

_TEM1001 1 row 1 col (numeric)

.

a 1 row 1 col (numeric)

16

Consequently, the values of x remain unchanged by the previous calls, as shown in Figure 6.12. The lesson
to learn from this example is this: do not pass in an expression or literal as an output argument to a module.
Use only matrix names for output arguments. For example, the correct way to call the Square module is to
eliminate the loop and simply use the statement run Square(x, y);.

Figure 6.12 An Unchanged Matrix

. .
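
For reference, here is the corrected call written out as a complete sketch. Because x is passed by name rather than as a subscripted expression, the module updates it in place:

start Square(a,b);
a = b##2;
finish;

x = {. .}; /* initialize with missing values */
y = {3 4};
run Square(x, y); /* pass matrix names; x is updated in place */
print x; /* prints 9 16 */
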

Storing and Loading Modules

You can store and reload modules by using the STORE statement. The STORE statement saves the module
in a storage library. The stored module persists even when you exit PROC IML or exit the SAS System.
After a module is stored, you can use the module in other SAS/IML programs by using the LOAD statement
prior to calling the module. The syntax of the STORE and LOAD statements is as follows:
STORE MODULE= name ;

LOAD MODULE= name ;

You can view the names of the modules in storage with the SHOW statement, as follows:

show storage;

See Chapter 17, "Storage Features," for details about using the library storage facilities.
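
As a sketch, storing a module in one session and loading it in a later session looks like the following (the module name MyAdd is hypothetical):

proc iml;
start MyAdd(x, y);
return (x + y);
finish;
store module=MyAdd; /* save the module in the storage library */
quit;

proc iml;
load module=MyAdd; /* retrieve the stored module */
z = MyAdd(2, 3);
print z; /* prints 5 */
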

Termination Statements

You can stop execution with a PAUSE, STOP, or ABORT statement. The QUIT statement is also a
termination statement, but it causes the IML procedure to exit immediately. The other termination statements do
not cause PROC IML to exit until the statements are executed. The following sections describe the PAUSE,
STOP, ABORT, and QUIT statements.

PAUSE Statement

The general form of the PAUSE statement is as follows:


PAUSE < message > < * > ;

The PAUSE statement does the following:

 stops execution of a module

 remembers where it stopped

 prints a message that you can specify

 sets the current program environment and symbol table to be that of the module that contains the
PAUSE statement. This means that you can type statements that reference local variables in the
module. For example, you might want to use a PAUSE statement while debugging a module so that
you can print the value of local variables.

A RESUME statement enables you to continue execution at the location of the most recent PAUSE
statement.
You can use a STOP statement as an alternative to the RESUME statement to remove the paused state
and to return to the main scope outside the module. You can specify a message in the PAUSE statement.
This message is displayed in the output window when the PAUSE statement is executed. For example, the
following PAUSE statements each display a message:

pause "Please enter an assignment for X, then enter RESUME;";

msg = "Please enter an assignment for X, then enter RESUME;";


pause msg;

The PAUSE statement also writes a note to the SAS log. To suppress the note, use the * option, as shown in
the following statement:

pause *;

When you use a PAUSE, RESUME, STOP, or ABORT statement, keep in mind the following details:

 The PAUSE statement must be used from inside a module.

 It is an error to execute a RESUME statement without any outstanding pauses.

 You can define and execute modules while paused within another module.

 If a run-time error occurs inside a module, a PAUSE statement is automatically executed. This gives
you an opportunity to correct the error and resume execution of the module with a RESUME statement.
Alternatively, you can submit a STOP statement to exit from the module environment, or an ABORT
statement to exit PROC IML.

 You cannot reenter or redefine an active (paused) module.

 When paused, you can run another module that also pauses. The paused environments are stacked.

 You can put a RESUME statement inside a module. For example, suppose you are paused in module
A and then run module B, which executes a RESUME statement. Execution is resumed in module A
and does not return to module B.

 You can use the PAUSE and RESUME statements in both subroutine and function modules.

 If you pause in a subroutine module that has its own symbol table, then the statements executed while
paused use this symbol table. You must use a RESUME or a STOP statement to return to the global
symbol table environment.

 You can use the PAUSE and RESUME statements, in conjunction with the PUSH, QUEUE, and
EXECUTE subroutines described in Chapter 18, "Using SAS/IML Software to Generate SAS/IML
Statements," to execute SAS/IML statements that you generate within a module.
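
The following sketch (the module name Debug1 is hypothetical) shows a typical debugging pattern: pause inside the module, inspect local values interactively, and then resume:

proc iml;
start Debug1(x);
y = x + 1;
pause "Examine local variables, then enter RESUME;";
x = y * 2;
finish;

a = 3;
run Debug1(a);
/* while paused, you can submit: print y; */
/* to continue the module, submit: resume; */
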

STOP Statement

The general form of the STOP statement is as follows:


STOP ;

The STOP statement clears all pauses and returns to the main scope.

ABORT Statement

The general form of the ABORT statement is as follows:


ABORT ;

The ABORT statement stops execution and exits from PROC IML, much like a QUIT statement. The
difference is that the ABORT statement is an executable statement that can be used in IF-THEN statements
and in modules. For example, you might want to exit PROC IML if a certain error occurs. You can check
for the error in a module and execute the ABORT statement if the error occurs.

QUIT Statement

The syntax of the QUIT statement is as follows:


QUIT ;

The QUIT statement stops execution and exits from PROC IML. The QUIT statement is executed as soon
as the statement is parsed. Consequently, you cannot use QUIT in a module or as part of an IF-THEN/ELSE
statement.
Chapter 7

Working with SAS Data Sets

Contents
Overview
Opening a SAS Data Set
Making a SAS Data Set Current
Displaying SAS Data Set Information
Referring to a SAS Data Set
Listing Observations
    Specifying a Range of Observations
    Selecting a Set of Variables
    Selecting Observations
Reading Observations from a SAS Data Set
    Using the READ Statement with the VAR Clause
    Using the READ Statement with the VAR and INTO Clauses
    Using the READ Statement with the WHERE Clause
Editing a SAS Data Set
    Updating Observations
    Deleting Observations
Creating a SAS Data Set from a Matrix
    Using the CREATE Statement with the FROM Option
    Using the CREATE Statement with the VAR Clause
Understanding the End-of-File Condition
Producing Summary Statistics
Sorting a SAS Data Set
Indexing a SAS Data Set
Data Set Maintenance Functions
Summary of Commands
Comparison with the SAS DATA Step
Summary

Overview

SAS/IML software has many statements for passing data from SAS data sets to matrices and from matrices
to SAS data sets. You can create matrices from the variables and observations of a SAS data set in several
ways. You can create a column vector for each data set variable, or you can create a matrix where columns
correspond to data set variables. You can use all the observations in a data set or use a subset of them.
You can also create a SAS data set from a matrix. The columns correspond to data set variables and the rows
correspond to observations. Data management commands enable you to edit, append, rename, or delete SAS
data sets from within the SAS/IML environment.
When reading a SAS data set, you can read any number of observations into a matrix either sequentially,
directly by record number, or conditionally according to conditions in a WHERE clause. You can also index
a SAS data set. The indexing capability facilitates retrievals by the indexed variable.
Operations on SAS data sets are performed with straightforward, consistent, and powerful statements. For
example, the LIST statement can perform the following tasks:

 list the next record

 list a specified record

 list any number of specified records

 list the whole file

 list records satisfying one or more conditions

 list specified variables or all variables

If you want to read values into a matrix, use the READ statement instead of the LIST statement with the
same operands and features as the LIST statement. You can specify operands that control which records and
variables are used indirectly, as matrices, so that you can dynamically program the records, variables, and
conditional values you want.
In this chapter, you use the SAS data set CLASS, which contains the variables NAME, SEX, AGE, HEIGHT,
and WEIGHT, to learn about the following:

 opening a SAS data set

 examining the contents of a SAS data set

 displaying data values with the LIST statement

 reading observations from a SAS data set into matrices

 editing a SAS data set

 creating a SAS data set from a matrix

 displaying matrices with row and column headings

 producing summary statistics

 sorting a SAS data set

 indexing a SAS data set



 similarities and differences between the data set and the SAS DATA step

Throughout this chapter, the right angle brackets (>) indicate statements that you submit; responses from
Interactive Matrix Language follow.
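
If the CLASS data set does not already exist in your WORK library, you can create it with a DATA step before invoking IML. The following sketch reconstructs it from the full listing shown later in this chapter:

data class;
input name $ sex $ age height weight; /* list input */
datalines;
JOYCE   F 11 51.3  50.5
THOMAS  M 11 57.5  85.0
JAMES   M 12 57.3  83.0
JANE    F 12 59.8  84.5
JOHN    M 12 59.0  99.5
LOUISE  F 12 56.3  77.0
ROBERT  M 12 64.8 128.0
ALICE   F 13 56.5  84.0
BARBARA F 13 65.3  98.0
JEFFREY M 13 62.5  84.0
CAROL   F 14 62.8 102.5
HENRY   M 14 63.5 102.5
ALFRED  M 14 69.0 112.5
JUDY    F 14 64.3  90.0
JANET   F 15 62.5 112.5
MARY    F 15 66.5 112.0
RONALD  M 15 67.0 133.0
WILLIAM M 15 66.5 112.0
PHILIP  M 16 72.0 150.0
;
run;
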
First, invoke the IML procedure by using the following statement:

> proc iml;

IML Ready

Opening a SAS Data Set

Before you can access a SAS data set, you must first submit a command to open it. There are three ways to
open a SAS data set:

 To simply read from an existing data set, submit a USE statement to open it for Read access. The
general form of the USE statement is as follows:
USE SAS-data-set < VAR operand > < WHERE(expression) > ;
With Read access, you can use the FIND, INDEX, LIST, and READ statements with the data set.

 To read and write to an existing data set, use the EDIT statement. The general form of the EDIT
statement is as follows:
EDIT SAS-data-set < VAR operand > < WHERE(expression) > ;
This statement enables you to use both the reading statements (LIST, READ, INDEX, and FIND) and
the writing statements (REPLACE, APPEND, DELETE, and PURGE).

 To create a new data set, use the CREATE statement to open a new data set for both output and input.
The general form of the CREATE statement is as follows:
CREATE SAS-data-set < VAR operand > ;
CREATE SAS-data-set FROM from-name < [COLNAME=column-name ROWNAME=row-name] > ;
Use the APPEND statement to place the matrix data into the newly created data set. If you do not use
the APPEND statement, the new data set has no observations.

If you want to list observations and create matrices from the data in the SAS data set named CLASS, you
must first submit a statement to open the CLASS data set. Because CLASS already exists, specify the USE
statement.

Making a SAS Data Set Current

IML data processing commands work on the current data set. This feature makes it unnecessary for you
to specify the data set as an operand each time. There are two current data sets, one for input and one for
output. IML makes a data set the current one as it is opened. You can also make a data set current by using
two setting statements, SETIN and SETOUT:

 The USE and SETIN statements make a data set current for input.

 The SETOUT statement makes a data set current for output.

 The CREATE and EDIT statements make a data set current for both input and output.

If you issue a USE, EDIT, or CREATE statement for a data set that is already open, the data set is made the
current data set. To find out which data sets are open and which are current input and current output data
sets, use the SHOW DATASETS statement.
The current observation is set by the last operation that performed input/output (I/O). If you want to set the
current observation without doing any I/O, use the SETIN (or SETOUT) statement with the POINT option.
After a data set is opened, the current observation is set to 0. If you attempt to list or read the current
observation, the current observation is converted to 1. You can make the CLASS data set current for input
and position the pointer at the 10th observation with the following statement:

> setin class point 10;

Displaying SAS Data Set Information

You can use SHOW statements to display information about your SAS data sets. The SHOW DATASETS
statement lists all open SAS data sets and their status. The SHOW CONTENTS statement displays the
variable names and types, the size, and the number of observations in the current input data set. For example,
to get information for the CLASS data set, issue the following statements:

> use class;


> show datasets;

LIBNAME MEMNAME OPEN MODE STATUS


------- ------- --------- ------
WORK .CLASS Input Current Input

> show contents;

VAR NAME TYPE SIZE


NAME CHAR 8
SEX CHAR 8
AGE NUM 8
HEIGHT NUM 8
WEIGHT NUM 8
Number of Variables: 5
Number of Observations: 19

As you can see, CLASS is the only data set open. The USE statement opens it for input, and it is the current
input data set. The full name for CLASS is WORK.CLASS. The libref is the default, WORK. The next
section tells you how to change the libref to another name.

Referring to a SAS Data Set

The USE, EDIT, and CREATE statements take as their first operand the data set name. This name can have
either one or two levels. If it is a two-level name, the first level refers to the name of the SAS data library; the
second name is the data set name. If the libref is WORK, the data set is stored in a directory for temporary
data sets; these are automatically deleted at the end of the session. Other librefs are associated with SAS
data libraries by using the LIBNAME statement.
If you specify only a single name, then IML supplies a default libref. At the beginning of an IML session,
the default libref is SASUSER if SASUSER is defined as a libref or WORK otherwise. You can reset the
default libref by using the RESET DEFLIB statement. If you want to create a permanent SAS data set, you
must specify a two-level name by using the RESET DEFLIB statement (see the chapter on SAS files in SAS
Language Reference: Concepts for more information about permanent SAS data sets):
> reset deflib=name;

Listing Observations

You can list variables and observations in a SAS data set with the LIST statement. The general form of the
LIST statement is as follows:
LIST < range > < VAR operand > < WHERE(expression) > ;

where

range specifies a range of observations.


operand selects a set of variables.
expression is an expression that is evaluated as being true or false.

The next three sections discuss how to use each of these clauses with the CLASS data set.

Specifying a Range of Observations

You can specify a range of observations with a keyword or by record number by using the POINT option.
You can use the range operand with the data management statements DELETE, FIND, LIST, READ, and
REPLACE.
You can specify range with any of the following keywords:

ALL specifies all observations.


CURRENT specifies the current observation.
NEXT < number > specifies the next observation or next number of observations.
AFTER specifies all observations after the current one.
POINT operand specifies observations by number, where operand can be one of the following:

Operand Example
a single record number point 5
a literal giving several record numbers point {2 5 10}
the name of a matrix that contains record numbers point p
an expression in parentheses point (p+1)

If you want to list all observations in the CLASS data set, use the keyword ALL to indicate that the range is
all observations. The following example demonstrates the use of this keyword:

> list all;

OBS NAME SEX AGE HEIGHT WEIGHT


------ -------- -------- --------- --------- ---------
1 JOYCE F 11.0000 51.3000 50.5000
2 THOMAS M 11.0000 57.5000 85.0000
3 JAMES M 12.0000 57.3000 83.0000
4 JANE F 12.0000 59.8000 84.5000
5 JOHN M 12.0000 59.0000 99.5000
6 LOUISE F 12.0000 56.3000 77.0000
7 ROBERT M 12.0000 64.8000 128.0000
8 ALICE F 13.0000 56.5000 84.0000
9 BARBARA F 13.0000 65.3000 98.0000
10 JEFFREY M 13.0000 62.5000 84.0000
11 CAROL F 14.0000 62.8000 102.5000
12 HENRY M 14.0000 63.5000 102.5000
13 ALFRED M 14.0000 69.0000 112.5000
14 JUDY F 14.0000 64.3000 90.0000
15 JANET F 15.0000 62.5000 112.5000
16 MARY F 15.0000 66.5000 112.0000
17 RONALD M 15.0000 67.0000 133.0000
18 WILLIAM M 15.0000 66.5000 112.0000
19 PHILIP M 16.0000 72.0000 150.0000

Without a range specification, the LIST statement lists only the current observation, which in this example
is now the last observation because of the previous LIST statement. Here is the result of using the LIST
statement:

> list;

OBS NAME SEX AGE HEIGHT WEIGHT


------ -------- -------- --------- --------- ---------
19 PHILIP M 16.0000 72.0000 150.0000

Use the POINT keyword with record numbers to list specific observations. You can follow the keyword
POINT with a single record number or with a literal giving several record numbers. Here are two examples:

> list point 5;

OBS NAME SEX AGE HEIGHT WEIGHT


------ -------- -------- --------- --------- ---------
5 JOHN M 12.0000 59.0000 99.5000

> list point {2 4 9};

OBS NAME SEX AGE HEIGHT WEIGHT


------ -------- -------- --------- --------- ---------
2 THOMAS M 11.0000 57.5000 85.0000
4 JANE F 12.0000 59.8000 84.5000
9 BARBARA F 13.0000 65.3000 98.0000

You can also indicate the range indirectly by creating a matrix that contains the records you want to list, as
in the following example:

> p={2 4 9};


> list point p;

OBS NAME SEX AGE HEIGHT WEIGHT


------ -------- -------- --------- --------- ---------
2 THOMAS M 11.0000 57.5000 85.0000
4 JANE F 12.0000 59.8000 84.5000
9 BARBARA F 13.0000 65.3000 98.0000

The range operand is usually listed first when you are using the access statements DELETE, FIND, LIST,
READ, and REPLACE. The following table shows access statements and their default ranges:

Statement Default Range


LIST current
READ current
FIND all
REPLACE current
APPEND always at end
DELETE current

Selecting a Set of Variables

You can use the VAR clause to select a set of variables. The general form of the VAR clause is as follows:
VAR operand ;

where operand can be specified by using one of the following items:

 a literal that contains variable names

 the name of a matrix that contains variable names

 an expression in parentheses yielding variable names

 one of the following keywords:


_ALL_ for all variables
_CHAR_ for all character variables
_NUM_ for all numeric variables

The following examples show all possible ways you can use the VAR clause:

var {time1 time5 time9}; /* a literal giving the variables */


var time; /* a matrix that contains the names */
var('time1':'time9'); /* an expression */
var _all_; /* a keyword */

For example, to list students' names from the CLASS data set, use the VAR clause with a literal, as in
following statement:

> list point p var{name};

OBS NAME
------ --------
2 THOMAS
4 JANE
9 BARBARA

To list AGE, HEIGHT, and WEIGHT, you can use the VAR clause with a matrix giving the variables, as in
the following statements:

> v={age height weight};


> list point p var v;

OBS AGE HEIGHT WEIGHT


------ --------- --------- ---------
2 11.0000 57.5000 85.0000
4 12.0000 59.8000 84.5000
9 13.0000 65.3000 98.0000

The VAR clause can be used with the following statements for the tasks described:

Statement VAR Clause Function


APPEND specifies which IML variables contain data to append to the data set
CREATE specifies the variables to go in the data set
EDIT limits which variables are accessed
LIST specifies which variables to list
READ specifies which variables to read
REPLACE specifies which data set variables' data values to replace with corresponding
IML variable data values
USE limits which variables are accessed

Selecting Observations

The WHERE clause conditionally selects observations, within the range specification, according to
conditions given in the expression. The general form of the WHERE clause is as follows:
WHERE variable comparison-op operand ;

where

variable is a variable in the SAS data set.


comparison-op is one of the following comparison operators:
< less than
<= less than or equal to
= equal to
> greater than
>= greater than or equal to
^= not equal to
? contains a given string
^? does not contain a given string
=: begins with a given string
=* sounds like or is spelled like a given string
operand is a literal value, a matrix name, or an expression in parentheses.

WHERE comparison arguments can be matrices. For the following operators, the WHERE clause succeeds
if all the elements in the matrix satisfy the condition:

^= ^? < <= > >=

For the following operators, the WHERE clause succeeds if any of the elements in the matrix satisfy the
condition:

= ? =: =*

Logical expressions can be specified within the WHERE clause by using the AND (&) and OR (|) operators.
The general form is as follows:
clause&clause (for an AND clause)
clause | clause (for an OR clause)
where clause can be a comparison, a parenthesized clause, or a logical expression clause that is evaluated
by using operator precedence.
For example, to list the names of all males in the data set CLASS, use the following statement:

> list all var{name} where(sex='M');

OBS NAME
------ ----------------------
2 THOMAS
3 JAMES
5 JOHN
7 ROBERT
10 JEFFREY
12 HENRY
13 ALFRED
17 RONALD
18 WILLIAM
19 PHILIP

The WHERE comparison arguments can be matrices. In the following cases that use the =* operator, the
comparison is made to each name to find a string that sounds like or is spelled like the given string or strings:

> n={name sex age};


> list all var n where(name=*{"ALFRED","CAROL","JUDY"});

OBS NAME SEX AGE


----- ---------------- -------- ---------
11 CAROL F 14.0000
13 ALFRED M 14.0000
14 JUDY F 14.0000

> list all var n where(name=*{"JON","JAN"});

OBS NAME SEX AGE


------ -------- -------- ---------
4 JANE F 12.0000
5 JOHN M 12.0000

To list AGE, HEIGHT, and WEIGHT for all students in their teens, use the following statement:

> list all var v where(age>12);



OBS AGE HEIGHT WEIGHT


------ --------- --------- ---------
8 13.0000 56.5000 84.0000
9 13.0000 65.3000 98.0000
10 13.0000 62.5000 84.0000
11 14.0000 62.8000 102.5000
12 14.0000 63.5000 102.5000
13 14.0000 69.0000 112.5000
14 14.0000 64.3000 90.0000
15 15.0000 62.5000 112.5000
16 15.0000 66.5000 112.0000
17 15.0000 67.0000 133.0000
18 15.0000 66.5000 112.0000
19 16.0000 72.0000 150.0000

NOTE: In the WHERE clause, the expression on the left side refers to values of the data set variables, and
the expression on the right side refers to matrix values. You cannot use comparisons that involve more than
one data set variable in a single comparison; for example, you cannot use either of the following expressions:

list all where(height>weight);


list all where(weight-height>0);

You could use the first statement if WEIGHT were a matrix name already defined rather than a variable in
the SAS data set.
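
You can also combine comparisons with the logical operators described earlier. For example, the following statement (a sketch) lists the names and ages of all females who are older than 13:

> list all var{name age} where(sex='F' & age>13);
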

Reading Observations from a SAS Data Set

Transferring data from a SAS data set to a matrix is done by using the READ statement. The SAS data
set you want to read data from must already be open. You can open a SAS data set with either the USE
or the EDIT statement. If you already have several data sets open, you can point to the one you want with
the SETIN statement, making it the current input data set. The general form of the READ statement is as
follows:
READ < range > < VAR operand > < WHERE(expression) > < INTO name > ;

where

range specifies a range of observations.


operand selects a set of variables.
expression is an expression that is evaluated as being true or false.
name names a target matrix for the data.

Using the READ Statement with the VAR Clause

Use the READ statement with the VAR clause to read variables from the current SAS data set into column
vectors of the VAR clause. Each variable in the VAR clause becomes a column vector with the same name
as the variable in the SAS data set. The number of rows is equal to the number of observations processed,
depending on the range specification and the WHERE clause. For example, to read the numeric variables
AGE, HEIGHT, and WEIGHT for all observations in the CLASS data set, use the following statements:

> read all var {age height weight};

Now use the SHOW NAMES statement to display all the matrices you have created so far in this chapter:

> show names;

AGE 19 rows 1 col num 8


HEIGHT 19 rows 1 col num 8
N 1 row 3 cols char 4
P 1 row 3 cols num 8
V 1 row 3 cols char 6
WEIGHT 19 rows 1 col num 8
Number of symbols = 8 (includes those without values)

You see that, with the READ statement, you have created the three numeric vectors AGE, HEIGHT, and
WEIGHT. (Notice that the matrices you created earlier, N, P, and V, are also listed.) You can select the
variables that you want to access with a VAR clause in the USE statement. The two previous statements can
also be written as follows:

use class var{age height weight};


read all;

Using the READ Statement with the VAR and INTO Clauses

Sometimes you want to have all of the numeric variables in the same matrix so that you can determine
correlations. Use the READ statement with the INTO clause and the VAR clause to read the variables
listed in the VAR clause into the single matrix named in the INTO clause. Each variable in the VAR clause
becomes a column of the target matrix. If there are p variables in the VAR clause and n observations are
processed, the target matrix in the INTO clause is an n × p matrix.
The following statement creates a matrix X that contains the numeric variables of the CLASS data set.
Notice the use of the keyword _NUM_ in the VAR clause to specify that all numeric variables be read.

> read all var _num_ into x;


> print x;

X
11 51.3 50.5
11 57.5 85
12 57.3 83
12 59.8 84.5
12 59 99.5
12 56.3 77
12 64.8 128
13 56.5 84
13 65.3 98
13 62.5 84
14 62.8 102.5
14 63.5 102.5
14 69 112.5
14 64.3 90
15 62.5 112.5
15 66.5 112
15 67 133
15 66.5 112
16 72 150

Using the READ Statement with the WHERE Clause

Use the WHERE clause as you did with the LIST statement, to conditionally select observations from within
the specified range. If you want to create a matrix FEMALE that contains the variables AGE, HEIGHT,
and WEIGHT for females only, use the following statements:

> read all var _num_ into female where(sex="F");


> print female;

FEMALE
11 51.3 50.5
12 59.8 84.5
12 56.3 77
13 56.5 84
13 65.3 98
14 62.8 102.5
14 64.3 90
15 62.5 112.5
15 66.5 112

Now try some special features of the WHERE clause to find values that begin with certain characters (the
=: operator) or that contain certain strings (the ? operator). To create a matrix J that contains the students
whose names begin with the letter J, use the following statements:

> read all var{name} into j where(name=:"J");



> print j;

J
JOYCE
JAMES
JANE
JOHN
JEFFREY
JUDY
JANET

To create a matrix AL of children whose names contain the string AL, use the following statement:

> read all var{name} into al where(name?"AL");


> print al;

AL
ALICE
ALFRED
RONALD

Editing a SAS Data Set

You can edit a SAS data set by using the EDIT statement. You can update values of variables, mark
observations for deletion, delete the marked observations, and save the changes you make. The general form of
the EDIT statement is as follows:
EDIT SAS-data-set < VAR operand > < WHERE(expression) > ;

where

SAS-data-set names an existing SAS data set.


operand selects a set of variables.
expression is an expression that is evaluated as being true or false.

Updating Observations

Suppose you have updated data and want to change some values in the CLASS data set. For instance,
suppose the student named Henry has had a birthday since the data were added to the CLASS data set. You
can do the following:

 make the CLASS data set current for input and output

 read the data



 change the appropriate data value

 replace the changed data in the data set

First, submit an EDIT statement to make the CLASS data set current for input and output. Then use the
FIND statement, which finds observation numbers and stores them in a matrix, to find the observation
number of the data for Henry and store it in the matrix d. Here are the statements:

> edit class;


> find all where(name={'HENRY'}) into d;
> print d;

D
12

The following statement lists the observation that contains the data for Henry:

> list point d;

OBS NAME SEX AGE HEIGHT WEIGHT


------ -------- -------- --------- --------- ---------
12 HENRY M 14.0000 63.5000 102.5000

As you see, the observation number is 12. Now read the value for AGE into a matrix and update its value.
Finally, replace the value in the CLASS data set and list the observation that contains the data for Henry
again. Here are the statements:

> age=15;
> replace;

1 observations replaced.

> list point 12;

OBS NAME SEX AGE HEIGHT WEIGHT


------ -------- -------- --------- --------- ---------
12 HENRY M 15.0000 63.5000 102.5000

Deleting Observations

Use the DELETE statement to mark an observation to be deleted. The general form of the DELETE
statement is as follows:
DELETE < range > < WHERE(expression) > ;

where

range specifies a range of observations.



expression is an expression that is evaluated as being true or false.

The following are examples of valid uses of the DELETE statement:

Statement Description
delete; deletes the current observation
delete point 10; deletes observation 10
delete all where (age>12); deletes all observations where
AGE is greater than 12

If a file accumulates a number of observations marked as deleted, you can clean out these observations and
renumber the remaining observations by using the PURGE statement.
Suppose the student named John has moved and you want to update the CLASS data set. You can remove
the observation by using the EDIT and DELETE statements. First, find the observation number of the data
for John and store it in the matrix d by using the FIND statement. Then submit a DELETE statement to
mark the record for deletion. A deleted observation is still physically in the file and still has an observation
number, but it is excluded from processing. The deleted observations appear as gaps when you list the file
by observation number, as in the following example:

> find all where(name={'JOHN'}) into d;


> print d;

D
5

> delete point d;

1 observation deleted.
> list all;

OBS NAME SEX AGE HEIGHT WEIGHT


------ -------- -------- --------- --------- ---------
1 JOYCE F 11.0000 51.3000 50.5000
2 THOMAS M 11.0000 57.5000 85.0000
3 JAMES M 12.0000 57.3000 83.0000
4 JANE F 12.0000 59.8000 84.5000
6 LOUISE F 12.0000 56.3000 77.0000
7 ROBERT M 12.0000 64.8000 128.0000
8 ALICE F 13.0000 56.5000 84.0000
9 BARBARA F 13.0000 65.3000 98.0000
10 JEFFREY M 13.0000 62.5000 84.0000
11 CAROL F 14.0000 62.8000 102.5000
12 HENRY M 15.0000 63.5000 102.5000
13 ALFRED M 14.0000 69.0000 112.5000
14 JUDY F 14.0000 64.3000 90.0000
15 JANET F 15.0000 62.5000 112.5000
16 MARY F 15.0000 66.5000 112.0000
17 RONALD M 15.0000 67.0000 133.0000
18 WILLIAM M 15.0000 66.5000 112.0000
19 PHILIP M 16.0000 72.0000 150.0000

Notice that there is a gap in the data where the deleted observation was (observation 5). To renumber the
observations and close the gaps, submit the PURGE statement. Note that the PURGE statement deletes any
indexes associated with a data set. Here is the statement:

> purge;

Creating a SAS Data Set from a Matrix

SAS/IML software provides the capability to create a new SAS data set from a matrix. You can use the
CREATE and APPEND statements to create a SAS data set from a matrix, where the columns of the matrix
become the data set variables and the rows of the matrix become the observations. Thus, an n × m matrix
produces a SAS data set with m variables and n observations. The CREATE statement opens the new SAS
data set for both input and output, and the APPEND statement writes to (outputs to) the data set.

Using the CREATE Statement with the FROM Option

You can create a SAS data set from a matrix by using the CREATE statement with the FROM option. This
form of the CREATE statement is as follows:
CREATE SAS-data-set FROM matrix < [COLNAME=column-name ROWNAME=row-name] > ;

where

SAS-data-set names the new data set.


matrix names the matrix that contains the data.
column-name names the variables in the data set.
row-name adds a variable that contains row titles to the data set.

Suppose you want to create a SAS data set named RATIO that contains a variable with the height-to-weight
ratios for each student. You first create a matrix that contains the ratios from the matrices HEIGHT and
WEIGHT that you have already defined. Next, use the CREATE and APPEND statements to open a new
SAS data set called RATIO and append the observations, naming the data set variable HTWT instead of
COL1.

htwt=height/weight;
create ratio from htwt[colname='htwt'];
append from htwt;

Now submit the SHOW DATASETS and SHOW CONTENTS statements:



> show datasets;

LIBNAME MEMNAME OPEN MODE STATUS


------- ------- --------- ------
WORK .CLASS Update
WORK .RATIO Update Current Input Current Output

> show contents;

VAR NAME TYPE SIZE


HTWT NUM 8
Number of Variables: 1
Number of Observations: 18

> close ratio;

As you can see, the new SAS data set RATIO has been created. It has 18 observations and 1 variable (recall
that you deleted 1 observation earlier).

Using the CREATE Statement with the VAR Clause

You can use a VAR clause with the CREATE statement to select the variables you want to include in the
new data set. In the previous example, the new data set RATIO had one variable. If you want to create a
similar data set but include the second variable NAME, you use the VAR clause. You could not do this with
the FROM option because the variable HTWT is numeric and the variable NAME is character. The following
statements create a new data set RATIO2 having the variables NAME and HTWT:

> create ratio2 var{name htwt};


> append;
> show contents;

VAR NAME TYPE SIZE


NAME CHAR 8
HTWT NUM 8
Number of Variables: 2
Number of Observations: 18

> close ratio2;

Notice that now the variable NAME is in the data set.



Understanding the End-of-File Condition

If you try to read past the end of a data set or point to an observation greater than the number of observations
in the data set, you create an end-of-file condition. If an end-of-file condition occurs inside a DO DATA
iteration group, IML transfers control to the next statement outside the current DO DATA group.
The following example uses a DO DATA loop while reading the CLASS data set. It reads the variable
WEIGHT one observation at a time and accumulates the weights of the students in the IML matrix SUM.
When the data are read, the total class weight is stored in the matrix SUM.

setin class point 0;


sum=0;
do data;
read next var{weight};
sum=sum+weight;
end;
print sum;
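
For the CLASS data set as edited in the previous sections (18 observations after one deletion), the total printed is as follows; the value agrees with the group means computed in the next section:

SUM

1801
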

Producing Summary Statistics

Summary statistics on the numeric variables of a SAS data set can be obtained with the SUMMARY
statement. These statistics can be based on subgroups of the data by using the CLASS clause in the
SUMMARY statement. The SAVE option in the OPT clause enables you to save the computed statistics
in matrices for later perusal. For example, consider the following statement:

> summary var {height weight} class {sex} stat{mean std} opt{save};

SEX Nobs Variable MEAN STD
------------------------------------------------
F 9 HEIGHT 60.58889 5.01833
WEIGHT 90.11111 19.38391

M 9 HEIGHT 64.45556 4.90742
WEIGHT 110.00000 23.84717

All 18 HEIGHT 62.52222 5.20978
WEIGHT 100.05556 23.43382
------------------------------------------------

This summary statement gives the mean and standard deviation of the variables HEIGHT and WEIGHT for
the two subgroups (male and female) of the data set CLASS. Since the SAVE option is set, the statistics of the
variables are stored in matrices under the name of the corresponding variables: each column corresponds to
a statistic and each row corresponds to a subgroup. Two other vectors, SEX and _NOBS_, are created. The
vector SEX contains the two distinct values of the CLASS variable SEX used in forming the two subgroups.
The vector _NOBS_ has the number of observations in each subgroup.
Note that the combined means and standard deviations of the two subgroups are displayed but not saved.
More than one CLASS variable can be used, in which case a subgroup is defined by the combination of the
values of the CLASS variables.
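
For example, the following statement (a sketch) computes subgroup means for each combination of the values of SEX and AGE:

> summary var {height weight} class {sex age} stat{mean} opt{save};
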

Sorting a SAS Data Set

The observations in a SAS data set can be ordered (sorted) by specific key variables. To sort a SAS data set,
close the data set if it is currently open, and issue a SORT statement for the variables by which you want the
observations to be ordered. Specify an output data set name if you want to keep the original data set. For
example, the following statement creates a new SAS data set named SORTED:

> sort class out=sorted by name;

The new data set has the observations from the data set CLASS, ordered by the variable NAME.
The following statement sorts in place the data set CLASS by the variable NAME:

> sort class by name;

However, when the SORT statement is finished executing, the original data set is replaced by the sorted data
set.
You can specify as many key variables as needed, and, optionally, each variable can be preceded by the
keyword DESCENDING, which denotes that the variable that follows is to be sorted in descending order.
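
For example, the following statement (a sketch) orders the observations by AGE from oldest to youngest, breaking ties alphabetically by NAME:

> sort class out=sorted2 by descending age name;
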

Indexing a SAS Data Set

Searching through a large data set for information about one or more specific observations can take a long
time because the procedure must read each record. You can reduce this search time by first indexing the data
set by a variable. The INDEX statement builds a special companion file that contains the values and record
numbers of the indexed variables. Once the index is built, IML can use the index for queries with WHERE
clauses if it decides that indexed retrieval is more efficient. Any number of variables can be indexed, but
only one index is in use at a given time. Note that purging a data set with the PURGE statement results in
the loss of all associated indexes.
Once you have indexed a data set, IML can use this index whenever a search is conducted with respect
to the indexed variables. The indexes are updated automatically whenever you change values in indexed
variables. When an index is in use, observations cannot be randomly accessed by their physical location
numbers. This means that the POINT range cannot be used when an index is in effect. However, if you
purge the observations marked for deletion, or sort the data set in place, the indexes become invalid and
IML automatically deletes them.
For example, if you want a list of all female students in the CLASS data set, you can first index CLASS
by the variable SEX. Then use the LIST statement with a WHERE clause. Of course, the CLASS data set
is small, and indexing does little if anything to speed queries with the WHERE clause. If the data set had
thousands of students, though, indexing could save search time.
To index the data set by the variable SEX, submit the following statement:

> index sex;

NOTE: Variable SEX indexed.


NOTE: Retrieval by SEX.

Now list all students by using the following statement. Notice the ordering of the special file built by
indexing by the variable SEX. Retrievals by SEX will be quick.

> list all;

OBS NAME SEX AGE HEIGHT WEIGHT


------ -------- -------- --------- --------- ---------
1 JOYCE F 11.0000 51.3000 50.5000
4 JANE F 12.0000 59.8000 84.5000
6 LOUISE F 12.0000 56.3000 77.0000
8 ALICE F 13.0000 56.5000 84.0000
9 BARBARA F 13.0000 65.3000 98.0000
11 CAROL F 14.0000 62.8000 102.5000
14 JUDY F 14.0000 64.3000 90.0000
15 JANET F 15.0000 62.5000 112.5000
16 MARY F 15.0000 66.5000 112.0000
2 THOMAS M 11.0000 57.5000 85.0000
3 JAMES M 12.0000 57.3000 83.0000
7 ROBERT M 12.0000 64.8000 128.0000
10 JEFFREY M 13.0000 62.5000 84.0000
12 HENRY M 15.0000 63.5000 102.5000
13 ALFRED M 14.0000 69.0000 112.5000
17 RONALD M 15.0000 67.0000 133.0000
18 WILLIAM M 15.0000 66.5000 112.0000
19 PHILIP M 16.0000 72.0000 150.0000

Data Set Maintenance Functions

Two functions and two subroutines are provided to perform data set maintenance:

DATASETS function obtains members in a data library. This function returns a character matrix that
contains the names of the SAS data sets in a library.
CONTENTS function obtains variables in a member. This function returns a character matrix that con-
tains the variable names for the SAS data set specified by libname and memname.
The variable list is returned in alphabetical order.
RENAME subroutine renames a SAS data set member in a specified library.
DELETE subroutine deletes a SAS data set member in a specified library.
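
The following sketch applies each function and routine to the WORK library; the member name RATIO follows the example earlier in this chapter, and the new name RATIO_OLD is hypothetical:

> ds = datasets('work'); /* names of the members in WORK */
> vars = contents('work','class'); /* variable names in WORK.CLASS */
> call rename('work','ratio','ratio_old'); /* rename WORK.RATIO */
> call delete('work','ratio_old'); /* delete the renamed data set */
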

See Chapter 23 for details and examples of these functions and routines.

Summary of Commands

You have seen that IML has an extensive set of commands that operate on SAS data sets. Table 7.1
summarizes the data management commands you can use to perform management tasks for which you might
normally use the SAS DATA step.

Table 7.1 Data Management Commands


Command Description
APPEND adds observations to the end of a SAS data set
CLOSE closes a SAS data set
CREATE creates and opens a new SAS data set for input and output
DELETE marks observations for deletion in a SAS data set
EDIT opens an existing SAS data set for input and output
FIND finds observations
INDEX indexes variables in a SAS data set
LIST lists observations
PURGE purges all deleted observations from a SAS data set
READ reads observations into IML variables
REPLACE writes observations back into a SAS data set
RESET DEFLIB names default libname
SAVE saves changes and reopens a SAS data set
SETIN selects an open SAS data set for input
SETOUT selects an open SAS data set for output
SHOW CONTENTS shows contents of the current input SAS data set
SHOW DATASETS shows SAS data sets currently open
SORT sorts a SAS data set
SUMMARY produces summary statistics for numeric variables
USE opens an existing SAS data set for input

Comparison with the SAS DATA Step

If you want to remain in the IML environment and mimic DATA step processing, you need to learn the basic
differences between IML and the SAS DATA step:

•  With SAS/IML software, you start with a CREATE statement instead of a DATA statement. You
   must explicitly set up all your variables with the correct attributes before you create a data set. This
   means that you must define character variables to have the desired string length beforehand. Numeric
   variables are the default, so any variable not defined as character is assumed to be numeric. In the
   DATA step, the variable attributes are determined from context across the whole step.

•  With SAS/IML software, you must use an APPEND statement to output an observation; in the DATA
   step, you either use an OUTPUT statement or let the DATA step output it automatically.

•  With SAS/IML software, you iterate with a DO DATA loop. In the DATA step, the iterations are
   implied.

•  With SAS/IML software, you have to close the data set with a CLOSE statement unless you plan to
   exit the IML environment with a QUIT statement. The DATA step closes the data set automatically
   at the end of the step.

•  The DATA step usually executes faster than IML.

In short, the DATA step treats the problem with greater simplicity, allowing shorter programs. However,
IML has more flexibility because it is interactive and has a powerful matrix-handling capability.
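For example, the following statements mimic a simple DATA step that builds a two-variable data set; this is
a sketch, and the data set name WEIGHTS and its values are assumed for illustration:

proc iml;
   name = "        ";                 /* eight blanks set the string length */
   weight = .;                        /* numeric is the default type        */
   create weights var {name weight};  /* CREATE instead of DATA             */
   name = "JOYCE"; weight = 50.5;
   append;                            /* APPEND instead of OUTPUT           */
   name = "THOMAS"; weight = 85.0;
   append;
   close weights;                     /* explicit CLOSE                     */
quit;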

Summary

In this chapter, you have learned many ways to interact with SAS data sets from within the IML environment.
You learned how to open and close a SAS data set; how to make a data set current for input and output; and
how to list observations by specifying a range of observations to process, a set of variables to use, and a
condition for subsetting observations. You also learned how to produce summary statistics, how to read
observations and variables from a SAS data set into matrices, and how to create a SAS data set from a
matrix of values.
Chapter 8

File Access

Contents
Overview of File Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Referring to an External File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Types of External Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Reading from an External File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Using the INFILE Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Using the INPUT Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Writing to an External File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Using the FILE Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Using the PUT Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Listing Your External Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Closing an External File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

Overview of File Access

In this chapter you learn about external files and how to refer to an external file, whether it is a text file or a
binary file. You learn how to read data from a file by using the INFILE and INPUT statements and how to
write data to an external file by using the FILE and PUT statements.
With external files, you must know the format in which the data are stored or to be written. This is in contrast
to SAS data sets, which are specialized files with a structure that is already known to the SAS System.
The SAS/IML statements used to access files are very similar to the corresponding statements in the SAS
DATA step. The following table summarizes the IML statements and their functions.

Statement Function
CLOSEFILE closes an external file
FILE opens an external file for output
INFILE opens an external file for input
INPUT reads from the current input file
PUT writes to the current output file
SHOW FILES shows all open files, their attributes, and their status
(current input and output files)

Referring to an External File

Suppose that you have data for students in a class. You have recorded the values for the variables NAME,
SEX, AGE, HEIGHT, and WEIGHT for each student and have stored the data in an external text file named
USER.TEXT.CLASS. If you want to read these data into IML variables, you need to indicate where the data
are stored. In other words, you need to name the input file. If you want to write data from matrices to a file,
you also need to name an output file.
There are two ways to refer to an input or output file: a pathname and a filename. A pathname is the name
of the file as it is known to the operating system. A filename is an indirect SAS reference to the file made
by using the FILENAME statement. You can identify a file in either way by using the FILE and INFILE
statements.
For example, you can refer to the input file where the class data are stored by using a literal pathname, that
is, a quoted string. The following statement opens the file USER.TEXT.CLASS for input:

infile 'user.text.class';

Similarly, if you want to output data to the file USER.TEXT.NEWCLASS, you need to reference the output
file with the following statement:

file 'user.text.newclass';

You can also refer to external files by using a filename. When using a filename as the operand, simply give
the name. The name must be one already associated with a pathname by a previously issued FILENAME
statement.
For example, suppose you want to reference the file with the class data by using a FILENAME statement.
First, you must associate the pathname with an alias (called a fileref), such as INCLASS. Then you can refer
to USER.TEXT.CLASS with the fileref INCLASS.
The following statements achieve the same result as the previous INFILE statement with the quoted
pathname:

filename inclass 'user.text.class';


infile inclass;

You can use the same technique for output files. The following statements have the same effect as the
previous FILE statement:

filename outclass 'user.text.newclass';


file outclass;

Three filenames have special meaning to IML: CARDS, LOG, and PRINT. These refer to the standard input
and output streams for all SAS sessions, as follows:

CARDS is a special filename for instream input data.
LOG is a special filename for log output.
PRINT is a special filename for standard print output.

When the pathname is specified, there is a limit of 64 characters to the operand.
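For example, the following statements use the special filenames LOG and PRINT to route PUT output; the
messages themselves are arbitrary:

file log;                    /* subsequent PUT statements write to the log */
put "Beginning to read the class data.";
file print;                  /* switch to the standard print file          */
put "The class data follow.";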

Types of External Files

Most files that you work with are text files, which means that they can be edited and displayed without any
special program. Text files under most host environments have special characters, called carriage-control
characters or end-of-line characters, to separate one record from the next.
If your file does not adhere to these conventions, it is called a binary file. Typically, binary files do not have
the usual record separators, and they can use any binary codes, including unprintable control characters.
If you want to read a binary file, you must specify RECFM=N in the INFILE statement and use the byte
operand (<) in the INPUT statement to specify the length of each item you want read. Treating a file as
binary enables you to have direct access to a file position by byte address by using the byte operand (>) in
the INPUT or PUT statement.
You write data to an external file by using the FILE and PUT statements. The output file can be text or
binary. If your output file is binary, you must specify RECFM=N in the FILE statement. One difference
between binary files and text files in output is that with binary files, the PUT statement does not put the
record-separator characters at the end of each record written.

Reading from an External File

After you have chosen a method to refer to the external file you want to read, you need an INFILE statement
to open it for input and an INPUT statement to tell IML how to read the data.
The next several sections cover how to use an INFILE statement and how to specify an INPUT statement so
that you can input data from an external file.

Using the INFILE Statement

An INFILE statement identifies an external file that contains data that you want to read. It opens the file
for input or, if the file is already open, makes it the current input file. This means that subsequent INPUT
statements are read from this file until another file is made the current input file.
The following options can be used with the INFILE statement:
112 F Chapter 8: File Access

FLOWOVER
enables the INPUT statement to go to the next record to obtain values for the variables.

LENGTH=variable
names a variable that contains the length of the current record, where the value is set to the number of
bytes used after each INPUT statement.

MISSOVER
prevents reading from the next input record when an INPUT statement reaches the end of the current
record without finding values for all variables. It assigns missing values to all values that are expected
but not found.

RECFM=N
specifies that the file is to be read in as a pure binary file rather than as a file with record-separator
characters. You must use the byte operands (< and >) to get new records rather than separate INPUT
statements or the new line operator (/).

STOPOVER
stops reading when an INPUT statement reaches the end of the current record without finding val-
ues for all variables in the statement. It treats going past the end of a record as an error condition,
triggering an end-of-file condition. The STOPOVER option is the default.

The FLOWOVER, MISSOVER, and STOPOVER options control how the INPUT statement works when
you try to read past the end of a record. You can specify only one of these options. Read these options
carefully so that you understand them completely.
The following example uses the INFILE statement with a FILENAME statement to read the class data file.
The MISSOVER option is used to prevent reading from the next record if values for all variables in the
INPUT statement are not found.

filename inclass 'user.text.class';


infile inclass missover;

You can specify the pathname with a quoted literal also. The preceding statements could be written as
follows:

infile 'user.text.class' missover;

Using the INPUT Statement

Once you have referenced the data file that contains your data with an INFILE statement, you need to tell
IML the following information about how the data are arranged:

•  the number of variables and their names

•  each variable's type, either numeric or character

•  the format of each variable's values

•  the columns that correspond to each variable

In other words, you must tell IML how to read the data.
The INPUT statement describes the arrangement of values in an input record. The INPUT statement reads
records from a file specified in the previously executed INFILE statement, reading the values into IML
variables.
There are two ways to describe a record's values in an IML INPUT statement:

•  list (or scanning) input

•  formatted input

Following are several examples of valid INPUT statements for the class data file, depending, of course, on
how the data are stored.
If the data are stored with a blank or a comma between fields, then list input can be used. For example, the
INPUT statement for the class data file might look as follows:

infile inclass;
input name $ sex $ age height weight;

These statements tell IML the following:

•  There are five variables: NAME, SEX, AGE, HEIGHT, and WEIGHT.

•  Data fields are separated by commas or blanks.

•  NAME and SEX are character variables, as indicated by the dollar sign ($).

•  AGE, HEIGHT, and WEIGHT are numeric variables, the default.

The data must be stored in the same order in which the variables are listed in the INPUT statement.
Otherwise, you can use formatted input, which is column specific. Formatted input is the most flexible and
can handle any data file. Your INPUT statement for the class data file might look as follows:

infile inclass;
input @1 name $char8. @10 sex $char1. @15 age 2.0
@20 height 4.1 @25 weight 5.1;

These statements tell IML the following:

•  NAME is a character variable; its value begins in column 1 (indicated by @1) and occupies eight
   columns ($CHAR8.).

•  SEX is a character variable; its value is found in column 10 ($CHAR1.).

•  AGE is a numeric variable; its value is found in columns 15 and 16 and has no decimal places (2.0).

•  HEIGHT is a numeric variable found in columns 20 through 23 with one decimal place implied (4.1).

•  WEIGHT is a numeric variable found in columns 25 through 29 with one decimal place implied (5.1).

The next sections discuss these two modes of input.

List Input

If your data are recorded with a comma or one or more blanks between data fields, you can use list input to
read your data. If you have missing values (that is, unknown values), they must be represented by a period
(.) rather than a blank field.
When IML looks for a value, it skips past blanks and tab characters. Then it scans for a delimiter to the
value. The delimiter is a blank, a comma, or the end of the record. When the ampersand (&) format modifier
is used, IML looks for two blanks, a comma, or the end of the record.
The general form of the INPUT statement for list input is as follows:

INPUT variable <$> <&> < . . . variable <$> <&> > ;

where

variable names the variable to be read by the INPUT statement.


$ indicates that the preceding variable is character.
& indicates that a character value can have a single embedded blank. Because a blank normally
indicates the end of a data value, use the ampersand format modifier to indicate the end of the
value with at least two blanks or a comma.

With list input, IML scans the input lines for values. Consider using list input in the following cases:

•  when blanks or commas separate input values

•  when periods rather than blanks represent missing values

List input is the default in several situations. Descriptions of these situations and the behavior of IML
follow:

•  If no input format is specified for a variable, IML scans for a number.

•  If a single dollar sign or ampersand format modifier is specified, IML scans for a character value. The
   ampersand format modifier enables single embedded blanks to occur.

•  If a format is given with width unspecified or zero, IML scans for the first blank or comma.

If the end of a record is encountered before IML finds a value, then the behavior is as described by the
record overflow options in the INFILE statement discussed in the section "Using the INFILE Statement" on
page 111.
Using the INPUT Statement F 115

When you read with list input, the order of the variables listed in the INPUT statement must agree with the
order of the values in the data file. For example, consider the following data:

Alice f 10 61 97
Beth f 11 64 105
Bill m 12 63 110

You can use list input to read these data by specifying the following INPUT statement:

input name $ sex $ age height weight;

NOTE: This statement implies that the variables are stored in the order given. That is, each line of data
contains a student's name, sex, age, height, and weight in that order and separated by at least one blank or
by a comma.

Formatted Input

The alternative to list input is formatted input. An INPUT statement reading formatted input must have
a SAS informat after each variable. An informat gives the data type and field width of an input value.
Formatted input can be used with pointer controls and format modifiers. Note, however, that neither pointer
controls nor format modifiers are necessary for formatted input.

Pointer Control Features

Pointer controls reset the pointer's column and line positions and tell the INPUT statement where to go to
read the data value. You use pointer controls to specify the columns and lines from which you want to read:

•  Column pointer controls move the pointer to the column you specify.

•  Line pointer controls move the pointer to the next line.

•  Line hold controls keep the pointer on the current input line.

•  Binary file indicator controls indicate that the input line is from a binary file.

Column Pointer Controls

Column pointer controls indicate in which column an input value starts. Column pointer controls begin with
either an at sign (@) or a plus sign (+). A complete list follows:

@n moves the pointer to column n.


@point-variable moves the pointer to the column given by the current value of point-variable.
@(expression) moves the pointer to the column given by the value of the expression. The expression
must evaluate to a positive integer.
+n moves the pointer n columns.

+point-variable moves the pointer the number of columns given by the value of point-variable.
+(expression) moves the pointer the number of columns given by the value of expression. The value of
expression can be positive or negative.

Here are some examples of using column pointer controls:

Example Meaning
@12 go to column 12
@N go to the column given by the value of N
@(N-1) go to the column given by the value of N-1
+5 skip 5 spaces
+N skip N spaces
+(N+1) skip N+1 spaces

In the earlier example that used formatted input, you used several pointer controls. Here are the statements:

infile inclass;
input @1 name $char8. @10 sex $char1. @15 age 2.0
@20 height 4.1 @25 weight 5.1;

The @1 moves the pointer to column 1, the @10 moves it to column 10, and so on. You move the pointer
to the column where the data field begins and then supply an informat specifying how many columns the
variable occupies. The INPUT statement could also be written as follows:

input @1 name $char8. +1 sex $char1. +4 age 2. +3 height 4.1


+1 weight 5.1;

In this form, you move the pointer to column 1 (@1) and read eight columns. The pointer is now at column
9. Now, move the pointer +1 columns to column 10 to read SEX. The $char1. informat says to read a
character variable occupying one column. After you read the value for SEX, the pointer is at column 11, so
move it to column 15 with +4 and read AGE in columns 15 and 16 (the 2. informat). The pointer is now at
column 17, so move +3 columns and read HEIGHT. The same idea applies for reading WEIGHT.

Line Pointer Control

The line pointer control (/) directs IML to skip to the next line of input. You need a line pointer control
when a record of data takes more than one line. You use the new line pointer control (/) to skip to the next
line and continue reading data. In the example reading the class data, you do not need to skip a line because
each line of data contains all the variables for a student.
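If each student's record did occupy two lines (say, name and sex on the first line; age, height, and weight
on the second), a sketch of the INPUT statement might look as follows:

input name $ sex $ / age height weight;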

Line Hold Control

The trailing at sign (@), when at the end of an INPUT statement, directs IML to hold the pointer on the
current record so that you can read more data with subsequent INPUT statements. You can use it to read
several records from a single line of data. Sometimes, when a record is very short (say, 10 columns or
so), you can save space in your external file by coding several records on the same line.
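For example, the following sketch assumes that each line of the file holds two pairs of values; the trailing
at sign holds the line so that the second INPUT statement reads from the same line:

input x1 y1 @;       /* read the first pair and hold the line   */
input x2 y2;         /* read the second pair from the same line */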

Binary File Indicator Controls

When the external file you want to read is a binary file (RECFM=N is specified in the INFILE statement),
you must tell IML how to read the values by using the following binary file indicator controls:

>n start reading the next record at the byte position n in the file.
>point-variable start reading the next record at the byte position in the file given by point-variable.
>(expression) start reading the next record at the byte position in the file given by expression.
<n read the number of bytes indicated by the value of n.
<point-variable read the number of bytes indicated by the value of point-variable.
<(expression) read the number of bytes indicated by the value of expression.
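The following statements are a minimal sketch of these operands; the pathname and the record layout are
assumptions for illustration:

infile 'user.binary.dat' recfm=n;  /* open a binary file          */
input >17 <8 name $char8.;         /* go to byte 17; read 8 bytes */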

Pattern Searching

You can have the input mechanism search for patterns of text by using the at sign (@) with a character
operand. IML starts searching at the current position, advances until it finds the pattern, and leaves the
pointer at the position immediately after the found pattern in the input record. For example, the following
statement searches for the pattern NAME= and then uses list input to read the value after the found pattern:

input @ 'NAME=' name $;

If the pattern is not found, then the pointer is left past the end of the record, and the rest of the INPUT
statement follows the conventions based on the options MISSOVER, STOPOVER, and FLOWOVER described
in the section "Using the INFILE Statement" on page 111. If you use pattern searching, you usually specify
the MISSOVER option so that you can control for the occurrences of the pattern not being found.
Notice that the MISSOVER feature enables you to search for a variety of items in the same record, even if
some of them are not found. For example, the following statements are able to read in the ADDR variable
even if NAME= is not found (in which case, NAME is unvalued):

infile in1 missover;


input @1 @ "NAME=" name $
@1 @ "ADDR=" addr &
@1 @ "PHONE=" phone $;

The pattern operand can use any characters except for the following:

% $ [ ] { } < > ? * # @ ` (backquote)

Record Directives

Each INPUT statement goes to a new record except in the following special cases:

•  An at sign (@) at the end of an INPUT statement specifies that the record is to be held for future
   INPUT statements.

•  Binary files (RECFM=N) always hold their records until the > directive.

As discussed in the syntax of the INPUT statement, the line pointer operator (/) instructs the input mecha-
nism to go immediately to the next record. For binary (RECFM=N) files, the > directive is used instead of
the /.

Blanks

For character values, the informat determines the way blanks are interpreted. For example, the $CHARw.
informat reads blanks as part of the whole value, while the BZw. informat turns blanks into zeros. See SAS
Language Reference: Dictionary for more information about informats.
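For example, in the following sketch (the AMOUNT field is hypothetical), blanks within the NAME field are
kept as part of the value, while blanks in the AMOUNT field are read as zeros:

input @1 name $char8. @10 amount bz5.;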

Missing Values

Missing values in formatted input are represented by blanks or a single period for a numeric value and by
blanks for a character value.

Matrix Use

Data values are either character or numeric. Input variables always result in scalar (one row by one column)
values with type (character or numeric) and length determined by the input format.

End-of-File Condition

End of file is the condition of trying to read a record when there are no more records to read from the file.
The consequences of an end-of-file condition are described as follows:

•  All the variables in the INPUT statement that encountered end of file are freed of their values. You
   can use the NROW or NCOL function to test whether this has happened.

•  If end of file occurs inside a DO DATA loop, execution is passed to the statement after the END
   statement in the loop.

For text files, end of file is encountered first as the end of the last record. The next time input is attempted,
the end-of-file condition is raised.
For binary files, end of file can result in the input mechanism returning a record that is shorter than the
requested length. In this case IML still attempts to process the record, using the rules described in the
section "Using the INFILE Statement" on page 111.
The DO DATA loop provides a convenient mechanism for handling end of file.
For example, to read the class data from the external file USER.TEXT.CLASS into a SAS data set, you need
to perform the following steps:

1. Establish a fileref referencing the data file.

2. Use an INFILE statement to open the file for input.

3. Initialize any character variables by setting the length.

4. Create a new SAS data set with a CREATE statement. You want to list the variables you plan to input
in a VAR clause.

5. Use a DO DATA loop to read the data one line at a time.

6. Write an INPUT statement telling IML how to read the data.

7. Use an APPEND statement to add the new data line to the end of the new SAS data set.

8. End the DO DATA loop.

9. Close the new data set.

10. Close the external file with a CLOSEFILE statement.

Your statements should look as follows:

filename inclass 'user.text.class';


infile inclass missover;
name="12345678";
sex="1";
create class var{name sex age height weight};
do data;
input name $ sex $ age height weight;
append;
end;
close class;
closefile inclass;

Note that the APPEND statement is not executed if the INPUT statement reads past the end of file since
IML escapes the loop immediately when the condition is encountered.

Differences with the SAS DATA Step

If you are familiar with the SAS DATA step, you will notice that the following features are supported
differently or are not supported in IML:

•  The pound sign (#) directive supporting multiple current records is not supported.

•  Grouping parentheses are not supported.

•  The colon (:) format modifier is not supported.

•  The byte operands (< and >) are new features supporting binary files.

•  The ampersand (&) format modifier causes IML to stop reading data if a comma is encountered. Use
   of the ampersand format modifier is valid with list input only.

•  The RECFM=F option is not supported.

Writing to an External File

If you have data in matrices and you want to write these data to an external file, you need to reference,
or point to, the file (as discussed in the section "Referring to an External File" on page 110). The FILE
statement opens the file for output so that you can write data to it. You need to specify a PUT statement to
direct how the data are output. These two statements are discussed in the following sections.

Using the FILE Statement

The FILE statement is used to refer to an external file. If you have values stored in matrices, you can write
these values to a file. Just as with the INFILE statement, you need a fileref to point to the file you want to
write to. You use a FILE statement to indicate that you want to write to rather than read from a file.
For example, if you want to output to the file USER.TEXT.NEWCLASS, you can specify the file with a
quoted literal pathname. Here is the statement:

> file 'user.text.newclass';

Otherwise, you can first establish a fileref and then refer to the file by its fileref, as follows:

> filename outclass 'user.text.newclass';


> file outclass;

There are two options you can use in the FILE statement:

RECFM=N specifies that the file is to be written as a pure binary file without record-separator
characters.
LRECL=operand specifies the size of the buffer to hold the records.

The FILE statement opens a file for output or, if the file is already open, makes it the current output file so
that subsequent PUT statements write to the file. The FILE statement is similar in syntax and operation to
the INFILE statement.
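For example, the following statement (a sketch with an assumed pathname) opens a binary output file with
a 512-byte buffer:

file 'user.binary.out' recfm=n lrecl=512;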

Using the PUT Statement

The PUT statement writes lines to the SAS log, to the SAS output file, or to any external file specified in a
FILE statement. The file associated with the most recently executed FILE statement is the current output
file.
You can use the following arguments with the PUT statement:

variable names the IML variable with a value that is put to the current pointer position in the
record. The variable must be scalar valued. The put variable can be followed immediately
by an output format.
literal gives a literal to be put to the current pointer position in the record. The literal can be
followed immediately by an output format.
(expression) must produce a scalar-valued result. The expression can be immediately followed by an
output format.
format names the output formats for the values.
pointer-control moves the output pointer to a line or column.

Pointer Control Features

Most PUT statements need the added flexibility obtained with pointer controls. IML keeps track of its
position on each output line with a pointer. With specifications in the PUT statement, you can control
pointer movement from column to column and line to line. The pointer controls available are discussed in
the section "Using the INPUT Statement" on page 112.

Differences with the SAS DATA Step

If you are familiar with the SAS DATA step, you will notice that the following features are supported
differently or are not supported:

•  The pound sign (#) directive supporting multiple current records is not supported.

•  Grouping parentheses are not supported.

•  The byte operands (< and >) are a new feature supporting binary files.

Examples

Writing a Matrix to an External File

If you have data stored in an n × m matrix and you want to output the values to an external file, you need to
write out the matrix element by element.

For example, suppose you have a matrix X that contains data that you want written to the file
USER.MATRIX. Suppose also that X contains ones and zeros so that the format for output can be one
column. You need to do the following:

1. Establish a fileref, such as OUT.

2. Use a FILE statement to open the file for output.

3. Specify a DO loop for the rows of the matrix.

4. Specify a DO loop for the columns of the matrix.

5. Use a PUT statement to specify how to write the element value.

6. End the inner DO loop.

7. Skip a line.

8. End the outer DO loop.

9. Close the file.

Your statements should look as follows:

filename out 'user.matrix';


file out;
do i = 1 to nrow(x);
do j = 1 to ncol(x);
put (x[i,j]) 1.0 +2 @;
end;
put;
end;
closefile out;

The output file contains a record for each row of the matrix. For example, if your matrix is 4 × 4, then the
file might look as follows:

1 1 0 1
1 0 0 1
1 1 1 0
0 1 0 1

Quick Printing to the PRINT File

You can use the FILE PRINT statement to route output to the standard print file. The following statements
generate data that are output to the PRINT file:

> file print;


> do a = 0 to 6.28 by .2;
> x = sin(a);
> p = (x+1)#30;
> put @1 a 6.4 +p x 8.4;
> end;

Here is the resulting output:

0.0000 0.0000
0.2000 0.1987
0.4000 0.3894
0.6000 0.5646
0.8000 0.7174
1.0000 0.8415
1.2000 0.9320
1.4000 0.9854
1.6000 0.9996
1.8000 0.9738
2.0000 0.9093
2.2000 0.8085
2.4000 0.6755
2.6000 0.5155
2.8000 0.3350
3.0000 0.1411
3.2000 -0.0584
3.4000 -0.2555
3.6000 -0.4425
3.8000 -0.6119
4.0000 -0.7568
4.2000 -0.8716
4.4000 -0.9516
4.6000 -0.9937
4.8000 -0.9962
5.0000 -0.9589
5.2000 -0.8835
5.4000 -0.7728
5.6000 -0.6313
5.8000 -0.4646
6.0000 -0.2794
6.2000 -0.0831

Listing Your External Files

To list all open files and their current input or current output status, use the SHOW FILES statement.

Closing an External File

The CLOSEFILE statement closes files opened by an INFILE or FILE statement. You specify the
CLOSEFILE statement just as you do the INFILE or FILE statement. For example, the following statements open
the external file USER.TEXT.CLASS for input and then close it:

filename in 'user.text.class';
infile in;
closefile in;

Summary

In this chapter, you learned how to refer to, or point to, an external file by using a FILENAME statement.
You can use the FILENAME statement whether you want to read from or write to an external file. The file
can also be referenced by a quoted literal pathname. You also learned about the difference between a text
file and a binary file.
You learned how to read data from an external file with the INFILE and INPUT statements, using either list
or formatted input. You learned how to write your matrices to an external file by using the FILE and PUT
statements. Finally, you learned how to close your files.
Chapter 9

General Statistics Examples

Contents
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
General Statistics Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Example 9.1: Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Example 9.2: Newton's Method for Solving Nonlinear Systems of Equations . . . . 127
Example 9.3: Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Example 9.4: Alpha Factor Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Example 9.5: Categorical Linear Models . . . . . . . . . . . . . . . . . . . . . . . . 134
Example 9.6: Regression of Subsets of Variables . . . . . . . . . . . . . . . . . . . 138
Example 9.7: Response Surface Methodology . . . . . . . . . . . . . . . . . . . . . 144
Example 9.8: Logistic and Probit Regression for Binary Response Models . . . . . . 147
Example 9.9: Linear Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Example 9.10: Quadratic Programming . . . . . . . . . . . . . . . . . . . . . . . . . 155
Example 9.11: Regression Quantiles . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Example 9.12: Simulations of a Univariate ARMA Process . . . . . . . . . . . . . . 161
Example 9.13: Parameter Estimation for a Regression Model with ARMA Errors . . 163
Example 9.14: Iterative Proportional Fitting . . . . . . . . . . . . . . . . . . . . . . 170
Example 9.15: Full-Screen Nonlinear Regression . . . . . . . . . . . . . . . . . . . 172
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

Overview

SAS/IML software has many linear operators that perform high-level operations commonly needed in
applying linear algebra techniques to data analysis. The similarity of the Interactive Matrix Language notation
and matrix algebra notation makes translation from algorithm to program a straightforward task. The
examples in this chapter show a variety of matrix operators at work.
You can use these examples to gain insight into the more complex problems you might need to solve. Some
of the examples perform the same analyses as performed by procedures in SAS/STAT software and are not
meant to replace them. The examples are included as learning tools.

General Statistics Examples

Example 9.1: Correlation

The following statements show how you can define modules to compute correlation coefficients between
numeric variables and standardized values for a set of data. For more efficient computations, use the built-in
CORR function and the STD function.
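For comparison, the following statements are a brief sketch of that built-in approach; they assume only that
x is a numeric data matrix:

n = nrow(x);
corrX = corr(x);                          /* correlation matrix */
stdX = (x - repeat(mean(x), n, 1)) /
       repeat(std(x), n, 1);              /* standardized data  */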

proc iml;
/* Module to compute correlations */
start corr;
n = nrow(x); /* number of observations */
sum = x[+,] ; /* compute column sums */
xpx = t(x)*x-t(sum)*sum/n; /* compute sscp matrix */
s = diag(1/sqrt(vecdiag(xpx))); /* scaling matrix */
corr = s*xpx*s; /* correlation matrix */
print "Correlation Matrix",,corr[rowname=nm colname=nm] ;
finish corr;

/* Module to standardize data */


start std;
mean = x[+,] /n; /* means for columns */
x = x-repeat(mean,n,1); /* center x to mean zero */
ss = x[##,] ; /* sum of squares for columns */
std = sqrt(ss/(n-1)); /* standard deviation estimate*/
x = x*diag(1/std); /* scaling to std dev 1 */
print ,"Standardized Data",,X[colname=nm] ;
finish std;

/* Sample run */
x = { 1 2 3,
3 2 1,
4 2 1,
0 4 1,
24 1 0,
1 3 8};
nm={age weight height};
run corr;
run std;

The results are shown in Output 9.1.1.



Output 9.1.1 Correlation Coefficients and Standardized Values

Correlation Matrix

corr
AGE WEIGHT HEIGHT

AGE 1 -0.717102 -0.436558
WEIGHT -0.717102 1 0.3508232
HEIGHT -0.436558 0.3508232 1

Standardized Data

x
AGE WEIGHT HEIGHT

-0.490116 -0.322749 0.2264554
-0.272287 -0.322749 -0.452911
-0.163372 -0.322749 -0.452911
-0.59903 1.6137431 -0.452911
2.0149206 -1.290994 -0.792594
-0.490116 0.6454972 1.924871

Example 9.2: Newtons Method for Solving Nonlinear Systems of Equations

This example solves a nonlinear system of equations by Newton's method. Let the nonlinear system be
represented by

F(x) = 0

where x is a vector and F is a vector-valued, possibly nonlinear function.


In order to find x such that F goes to 0, an initial estimate x0 is chosen, and Newton's iterative method for
converging to the solution is used:

x_{n+1} = x_n - J(x_n)^{-1} F(x_n)

where J(x) is the Jacobian matrix of partial derivatives of F with respect to x. (For more efficient
computations, use the built-in NLPNRA subroutine.)
For optimization problems, the same method is used, where F(x) is the gradient of the objective function
and J(x) becomes the Hessian (Newton-Raphson).
In this example, the system to be solved is

x1 + x2 - x1*x2 + 2 = 0
x1*exp(-x2) - 1 = 0

The following statements are organized into three modules: NEWTON, FUN, and DERIV.

/* Newton's Method to Solve a Nonlinear Function */


/* The user must supply initial values, */
/* and the FUN and DERIV functions. */
/* On entry: FUN evaluates the function f in terms of x */
/* initial values are given to x */
/* DERIV evaluates jacobian j */
/* tuning variables: CONVERGE, MAXITER. */
/* On exit: solution in x, function value in f close to 0 */
/* ITER has number of iterations. */
proc iml;
start newton;
run fun; /* evaluate function at starting values */
do iter = 1 to maxiter /* iterate until maxiter */
while(max(abs(f))>converge); /* iterations or convergence */
run deriv; /* evaluate derivatives in j */
delta = -solve(j,f); /* solve for correction vector */
x = x+delta; /* the new approximation */
run fun; /* evaluate the function */
end;
finish newton;

maxiter = 15; /* default maximum iterations */


converge = .000001; /* default convergence criterion */

/* User-supplied function evaluation */


start fun;
x1 = x[1] ;
x2 = x[2] ; /* extract the values */
f = (x1+x2-x1*x2+2) //
(x1*exp(-x2)-1); /* evaluate the function */
finish fun;

/* User-supplied derivatives of the function */


start deriv;
/* evaluate jacobian */
j = ((1-x2)||(1-x1) ) // (exp(-x2)||(-x1*exp(-x2)));
finish deriv;

do;
print "Solving the system: X1+X2-X1*X2+2=0, X1*EXP(-X2)-1=0" ,;
x={.1, -2}; /* starting values */
run newton;
print x f;
end;

The results are shown in Output 9.2.1.

Output 9.2.1 Newton's Method: Results

Solving the system: X1+X2-X1*X2+2=0, X1*EXP(-X2)-1=0

x f

0.0977731 5.3523E-9
-2.325106 6.1501E-8

Example 9.3: Regression

This example shows a regression module that calculates statistics that are associated with a linear regres-
sion.

/* Regression Routine */
/* Given X and Y, this fits Y = X B + E */
/* by least squares. */
proc iml;
start reg;
n = nrow(x); /* number of observations */
k = ncol(x); /* number of variables */
xpx = x`*x; /* crossproducts */
xpy = x`*y;
xpxi = inv(xpx); /* inverse crossproducts */
b = xpxi*xpy; /* parameter estimates */
yhat = x*b; /* predicted values */
resid = y-yhat; /* residuals */
sse = resid`*resid; /* sum of squared errors */
dfe = n-k; /* degrees of freedom error */
mse = sse/dfe; /* mean squared error */
rmse = sqrt(mse); /* root mean squared error */
covb = xpxi#mse; /* covariance of estimates */
stdb = sqrt(vecdiag(covb)); /* standard errors */
t = b/stdb; /* ttest for estimates=0 */
probt = 1-probf(t#t,1,dfe); /* significance probability */
print name b stdb t probt;
s = diag(1/stdb);
corrb = s*covb*s; /* correlation of estimates */
print ,"Covariance of Estimates", covb[r=name c=name] ,
"Correlation of Estimates",corrb[r=name c=name] ;

if nrow(tval)=0 then return; /* is a t value specified? */


projx = x*xpxi*x`; /* hat matrix */
vresid = (i(n)-projx)*mse; /* covariance of residuals */
vpred = projx#mse; /* covariance of predicted values */
h = vecdiag(projx); /* hat leverage values */
lowerm = yhat-tval#sqrt(h*mse); /* low. conf lim for mean */
upperm = yhat+tval#sqrt(h*mse); /* upper lim. for mean */

lower = yhat-tval#sqrt(h*mse+mse); /* lower lim. for indiv*/


upper = yhat+tval#sqrt(h*mse+mse);/* upper lim. for indiv */
print ,,"Predicted Values, Residuals, and Limits" ,,
y yhat resid h lowerm upperm lower upper;
finish reg;

/* Routine to test a linear combination of the estimates */


/* given L, this routine tests hypothesis that LB = 0. */

start test;
dfn=nrow(L);
Lb=L*b;
vLb=L*xpxi*L`;
q=Lb`*inv(vLb)*Lb /dfn;
f=q/mse;
prob=1-probf(f,dfn,dfe);
print ,f dfn dfe prob;
finish test;

/* Run it on population of U.S. for decades beginning 1790 */

x= { 1 1 1,
1 2 4,
1 3 9,
1 4 16,
1 5 25,
1 6 36,
1 7 49,
1 8 64 };

y= {3.929,5.308,7.239,9.638,12.866,17.069,23.191,31.443};
name={"Intercept", "Decade", "Decade**2" };
tval=2.57; /* for 5 df at 0.025 level to get 95% conf. int. */
reset fw=7;
run reg;
do;
print ,"TEST Coef for Linear";
L={0 1 0 };
run test;
print ,"TEST Coef for Linear,Quad";
L={0 1 0,0 0 1};
run test;
print ,"TEST Linear+Quad = 0";
L={0 1 1 };
run test;
end;

The results are shown in Output 9.3.1.



Output 9.3.1 Regression Results

name b stdb t probt

Intercept 5.06934 0.96559 5.24997 0.00333
Decade -1.1099 0.4923 -2.2546 0.07385
Decade**2 0.53964 0.0534 10.106 0.00016

Covariance of Estimates

covb
Intercept Decade Decade**2

Intercept 0.93237 -0.4362 0.04277
Decade -0.4362 0.24236 -0.0257
Decade**2 0.04277 -0.0257 0.00285

Correlation of Estimates

corrb
Intercept Decade Decade**2

Intercept 1 -0.9177 0.8295
Decade -0.9177 1 -0.9762
Decade**2 0.8295 -0.9762 1

Predicted Values, Residuals, and Limits

y yhat resid h lowerm upperm lower upper

3.929 4.49904 -0.57 0.70833 3.00202 5.99606 2.17419 6.82389
5.308 5.00802 0.29998 0.27976 4.06721 5.94883 2.99581 7.02023
7.239 6.59627 0.64273 0.23214 5.73926 7.45328 4.62185 8.57069
9.638 9.26379 0.37421 0.27976 8.32298 10.2046 7.25158 11.276
12.866 13.0106 -0.1446 0.27976 12.0698 13.9514 10.9984 15.0228
17.069 17.8367 -0.7677 0.23214 16.9797 18.6937 15.8622 19.8111
23.191 23.742 -0.551 0.27976 22.8012 24.6828 21.7298 25.7542
31.443 30.7266 0.71638 0.70833 29.2296 32.2236 28.4018 33.0515

TEST Coef for Linear

f dfn dfe prob

5.08317 1 5 0.07385

TEST Coef for Linear,Quad

f dfn dfe prob

666.511 2 5 8.54E-7

TEST Linear+Quad = 0

Output 9.3.1 continued

f dfn dfe prob

1.67746 1 5 0.25184

Example 9.4: Alpha Factor Analysis

This example shows how an algorithm for computing alpha factor patterns (Kaiser and Caffrey 1965) is
implemented in the SAS/IML language.
You can store the following ALPHA subroutine in a catalog and load it when needed.

/* Alpha Factor Analysis */


/* Ref: Kaiser et al., 1965 Psychometrika, pp. 12-13 */
/* r correlation matrix (n.s.) already set up */
/* p number of variables */
/* q number of factors */
/* h communalities */
/* m eigenvalues */
/* e eigenvectors */
/* f factor pattern */
/* (IQ,H2,HI,G,MM) temporary use. freed up */
/* */
proc iml;
start alpha;
p = ncol(r);
q = 0;
h = 0; /* initialize */
h2 = i(p)-diag(1/vecdiag(inv(r))); /* smcs */
do while(max(abs(h-h2))>.001); /* iterate until converges */
h = h2;
hi = diag(sqrt(1/vecdiag(h)));
g = hi*(r-i(p))*hi+i(p);
call eigen(m,e,g); /* get eigenvalues and vecs */
if q=0 then do;
q = sum(m>1); /* number of factors */
iq = 1:q;
end; /* index vector */
mm = diag(sqrt(m[iq,])); /* collapse eigvals */
e = e[,iq] ; /* collapse eigvecs */
h2 = h*diag((e*mm) [,##]); /* new communalities */
end;
hi = sqrt(h);
h = vecdiag(h2);
f = hi*e*mm; /* resulting pattern */
free iq h2 hi g mm; /* free temporaries */
finish;

/* Correlation Matrix from Harmon, Modern Factor Analysis, */



/* Second edition, page 124, "Eight Physical Variables" */

r={1.000 .846 .805 .859 .473 .398 .301 .382 ,


.846 1.000 .881 .826 .376 .326 .277 .415 ,
.805 .881 1.000 .801 .380 .319 .237 .345 ,
.859 .826 .801 1.000 .436 .329 .327 .365 ,
.473 .376 .380 .436 1.000 .762 .730 .629 ,
.398 .326 .319 .329 .762 1.000 .583 .577 ,
.301 .277 .237 .327 .730 .583 1.000 .539 ,
.382 .415 .345 .365 .629 .577 .539 1.000};
nm = {Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8};
run alpha;
print ,"EIGENVALUES" , m;
print ,"COMMUNALITIES" , h[rowname=nm];
print ,"FACTOR PATTERN", f[rowname=nm];

The results are shown in Output 9.4.1.

Output 9.4.1 Alpha Factor Analysis: Results

EIGENVALUES

5.937855
2.0621956
0.1390178
0.0821054
0.018097
-0.047487
-0.09148
-0.100304

COMMUNALITIES

VAR1 0.8381205
VAR2 0.8905717
VAR3 0.81893
VAR4 0.8067292
VAR5 0.8802149
VAR6 0.6391977
VAR7 0.5821583
VAR8 0.4998126

FACTOR PATTERN

Output 9.4.1 continued

VAR1 0.813386 -0.420147
VAR2 0.8028363 -0.49601
VAR3 0.7579087 -0.494474
VAR4 0.7874461 -0.432039
VAR5 0.8051439 0.4816205
VAR6 0.6804127 0.4198051
VAR7 0.620623 0.4438303
VAR8 0.6449419 0.2895902

Example 9.5: Categorical Linear Models

This example fits a linear model to a function of the response probabilities

K log π = Xβ + e

where K is a matrix that compares each response category to the last. Data are from Kastenbaum and
Lamphiear (1959). First, the Grizzle-Starmer-Koch (1969) approach is used to obtain generalized least
squares estimates of β. These form the initial values for the Newton-Raphson solution for the maximum
likelihood estimates. The CATMOD procedure can also be used to analyze these binary data (see Cox
(1970)). Here is the program.

/* Categorical Linear Models */


/* by Least Squares and Maximum Likelihood */
/* CATLIN */
/* Input: */
/* n the s by p matrix of response counts */
/* x the s by r design matrix */

proc iml ;
start catlin;

/*---find dimensions---*/
s = nrow(n); /* number of populations */
r = ncol(n); /* number of responses */
q = r-1; /* number of function values */
d = ncol(x); /* number of design parameters */
qd = q*d; /* total number of parameters */

/*---get probability estimates---*/


rown = n[,+]; /* row totals */
pr = n/(rown*repeat(1,1,r)); /* probability estimates */
p = shape(pr[,1:q] ,0,1); /* cut and shaped to vector */
print "INITIAL PROBABILITY ESTIMATES" ,pr;

/* estimate by the GSK method */

/* function of probabilities */

f = log(p)-log(pr[,r])@repeat(1,q,1);

/* inverse covariance of f */
si = (diag(p)-p*p`)#(diag(rown)@repeat(1,q,q));
z = x@i(q); /* expanded design matrix */
h = z`*si*z; /* crossproducts matrix */
g = z`*si*f; /* cross with f */
beta = solve(h,g); /* least squares solution */
stderr = sqrt(vecdiag(inv(h))); /* standard errors */
run prob;
print ,"GSK ESTIMATES" , beta stderr ,pi;

/* iterations for ML solution */


crit = 1;
do it = 1 to 8 while(crit>.0005);/* iterate until converge*/

/* block diagonal weighting */


si = (diag(pi)-pi*pi`)#(diag(rown)@repeat(1,q,q));
g = z`*(rown@repeat(1,q,1)#(p-pi)); /* gradient */
h = z`*si*z; /* hessian */
delta = solve(h,g); /* solve for correction */
beta = beta+delta; /* apply the correction */
run prob; /* compute prob estimates */
crit = max(abs(delta)); /* convergence criterion */
end;
stderr = sqrt(vecdiag(inv(h))); /* standard errors */
print , "ML Estimates", beta stderr, pi;
print , "Iterations" it "Criterion" crit;
finish catlin;

/* subroutine to compute new prob estimates @ parameters */


start prob;
la = exp(x*shape(beta,0,q));
pi = la/((1+la[,+] )*repeat(1,1,q));
pi = shape(pi,0,1);
finish prob;

/*---prepare frequency data and design matrix---*/


n= { 58 11 05,
75 19 07,
49 14 10,
58 17 08,
33 18 15,
45 22 10,
15 13 15,
39 22 18,
04 12 17,
05 15 08}; /* frequency counts*/

x= { 1 1 1 0 0 0,
1 -1 1 0 0 0,
1 1 0 1 0 0,
1 -1 0 1 0 0,
1 1 0 0 1 0,

1 -1 0 0 1 0,
1 1 0 0 0 1,
1 -1 0 0 0 1,
1 1 -1 -1 -1 -1,
1 -1 -1 -1 -1 -1}; /* design matrix*/

run catlin;

The maximum likelihood estimates are shown in Output 9.5.1.

Output 9.5.1 Maximum Likelihood Estimates

INITIAL PROBABILITY ESTIMATES

pr

0.7837838 0.1486486 0.0675676
0.7425743 0.1881188 0.0693069
0.6712329 0.1917808 0.1369863
0.6987952 0.2048193 0.0963855
0.5 0.2727273 0.2272727
0.5844156 0.2857143 0.1298701
0.3488372 0.3023256 0.3488372
0.4936709 0.278481 0.2278481
0.1212121 0.3636364 0.5151515
0.1785714 0.5357143 0.2857143

GSK ESTIMATES

beta stderr

0.9454429 0.1290925
0.4003259 0.1284867
-0.277777 0.1164699
-0.278472 0.1255916
1.4146936 0.267351
0.474136 0.294943
0.8464701 0.2362639
0.1526095 0.2633051
0.1952395 0.2214436
0.0723489 0.2366597
-0.514488 0.2171995
-0.400831 0.2285779

Output 9.5.1 continued

pi

0.7402867
0.1674472
0.7704057
0.1745023
0.6624811
0.1917744
0.7061615
0.2047033
0.516981
0.2648871
0.5697446
0.2923278
0.3988695
0.2589096
0.4667924
0.3034204
0.1320359
0.3958019
0.1651907
0.4958784

ML Estimates

beta stderr

0.9533597 0.1286179
0.4069338 0.1284592
-0.279081 0.1156222
-0.280699 0.1252816
1.4423195 0.2669357
0.4993123 0.2943437
0.8411595 0.2363089
0.1485875 0.2635159
0.1883383 0.2202755
0.0667313 0.236031
-0.527163 0.216581
-0.414965 0.2299618

Output 9.5.1 continued

pi

0.7431759
0.1673155
0.7723266
0.1744421
0.6627266
0.1916645
0.7062766
0.2049216
0.5170782
0.2646857
0.5697771
0.292607
0.3984205
0.2576653
0.4666825
0.3027898
0.1323243
0.3963114
0.165475
0.4972044

it crit

Iterations 3 Criterion 0.0004092

Example 9.6: Regression of Subsets of Variables

This example performs regression with variable selection. Some of the methods used in this example are
also used in the REG procedure. Here is the program.

proc iml;

/*-------Initialization-------------------------------*
| c,csave the crossproducts matrix |
| n number of observations |
| k total number of variables to consider |
| l number of variables currently in model |
| in a 0-1 vector of whether variable is in |
| b print collects results (L MSE RSQ BETAS ) |
*----------------------------------------------------*/
start initial;
n=nrow(x); k=ncol(x); k1=k+1; ik=1:k;
bnames={nparm mse rsquare} ||varnames;

/*---correct by mean, adjust out intercept parameter---*/


y=y-y[+,]/n; /* correct y by mean */
x=x-repeat(x[+,]/n,n,1); /* correct x by mean */
xpy=x`*y; /* crossproducts */

ypy=y`*y;
xpx=x`*x;
free x y; /* no longer need the data*/
csave=(xpx || xpy) //
(xpy`|| ypy); /* save copy of crossproducts*/
finish;

/*-----forward method------------------------------------------*/
start forward;
print "FORWARD SELECTION METHOD";
free bprint;
c=csave; in=repeat(0,k,1); L=0; /* no variables are in */
dfe=n-1; mse=ypy/dfe;
sprob=0;

do while(sprob<.15 & l<k);


indx=loc(^in); /* where are the variables not in?*/
cd=vecdiag(c)[indx,]; /* xpx diagonals */
cb=c[indx,k1]; /* adjusted xpy */
tsqr=cb#cb/(cd#mse); /* squares of t tests */
imax=tsqr[<:>,]; /* location of maximum in indx */
sprob=(1-probt(sqrt(tsqr[imax,]),dfe))*2;
if sprob<.15 then do; /* if t-test significant */
ii=indx[,imax]; /* pick most significant */
run swp; /* routine to sweep */
run bpr; /* routine to collect results */
end;
end;

print bprint[colname=bnames] ;
finish;

/*-----backward method----------------------------------------*/
start backward;
print "BACKWARD ELIMINATION ";
free bprint;
c=csave; in=repeat(0,k,1);
ii=1:k; run swp; run bpr; /* start with all variables in*/
sprob=1;

do while(sprob>.15 & L>0);


indx=loc(in); /* where are the variables in? */
cd=vecdiag(c)[indx,]; /* xpx diagonals */
cb=c[indx,k1]; /* bvalues */
tsqr=cb#cb/(cd#mse); /* squares of t tests */
imin=tsqr[>:<,]; /* location of minimum in indx */
sprob=(1-probt(sqrt(tsqr[imin,]),dfe))*2;
if sprob>.15 then do; /* if t-test nonsignificant */
ii=indx[,imin]; /* pick least significant */
run swp; /* routine to sweep in variable*/
run bpr; /* routine to collect results */

end;
end;

print bprint[colname=bnames] ;
finish;

/*-----stepwise method-----------------------------------------*/
start stepwise;
print "STEPWISE METHOD";
free bprint;
c=csave; in=repeat(0,k,1); L=0;
dfe=n-1; mse=ypy/dfe;
sprob=0;

do while(sprob<.15 & L<k);


indx=loc(^in); /* where are the variables not in?*/
nindx=loc(in); /* where are the variables in? */
cd=vecdiag(c)[indx,]; /* xpx diagonals */
cb=c[indx,k1]; /* adjusted xpy */
tsqr=cb#cb/cd/mse; /* squares of t tests */
imax=tsqr[<:>,]; /* location of maximum in indx */
sprob=(1-probt(sqrt(tsqr[imax,]),dfe))*2;
if sprob<.15 then do; /* if t-test significant */
ii=indx[,imax]; /* find index into c */
run swp; /* routine to sweep */
run backstep; /* check if remove any terms */
run bpr; /* routine to collect results */
end;
end;

print bprint[colname=bnames] ;
finish;

/*----routine to backwards-eliminate for stepwise--*/


start backstep;
if nrow(nindx)=0 then return;
bprob=1;
do while(bprob>.15 & L<k);
cd=vecdiag(c)[nindx,]; /* xpx diagonals */
cb=c[nindx,k1]; /* bvalues */
tsqr=cb#cb/(cd#mse); /* squares of t tests */
imin=tsqr[>:<,]; /* location of minimum in nindx*/
bprob=(1-probt(sqrt(tsqr[imin,]),dfe))*2;
if bprob>.15 then do;
ii=nindx[,imin];
run swp;
run bpr;
end;
end;
finish;

/*-----search all possible models----------------------------*/


start all;
/*---use method of schatzoff et al. for search technique--*/
betak=repeat(0,k,k); /* record estimates for best l-param model*/
msek=repeat(1e50,k,1);/* record best mse per # parms */
rsqk=repeat(0,k,1);   /* record best rsquare per # parms */
ink=repeat(0,k,k); /* record best set per # parms */
limit=2##k-1; /* number of models to examine */

c=csave; in=repeat(0,k,1);/* start out with no variables in model*/

do kk=1 to limit;
run ztrail; /* find which one to sweep */
run swp; /* sweep it in */
bb=bb//(L||mse||rsq||(c[ik,k1]#in)`);
if mse<msek[L,] then do; /* was this best for L parms? */
msek[L,]=mse; /* record mse */
rsqk[L,]=rsq; /* record rsquare */
ink[,L]=in; /* record which parms in model*/
betak[L,]=(c[ik,k1]#in)`;/* record estimates */
end;
end;

print "ALL POSSIBLE MODELS IN SEARCH ORDER";


print bb[colname=bnames]; free bb;

bprint=ik`||msek||rsqk||betak;
print "THE BEST MODEL FOR EACH NUMBER OF PARAMETERS";
print bprint[colname=bnames];
finish;

/*-subroutine to find number of trailing zeros in binary number*/


/* on entry: kk is the number to examine */
/* on exit: ii has the result */
/*-------------------------------------------------------------*/
start ztrail;
ii=1; zz=kk;
do while(mod(zz,2)=0); ii=ii+1; zz=zz/2; end;
finish;

/*-----subroutine to sweep in a pivot--------------------------*/


/* on entry: ii has the position(s) to pivot */
/* on exit: in, L, dfe, mse, rsq recalculated */
/*-------------------------------------------------------------*/
start swp;
if abs(c[ii,ii])<1e-9 then do; print "failure", c;stop;end;
c=sweep(c,ii);
in[ii,]=^in[ii,];
L=sum(in); dfe=n-1-L;
sse=c[k1,k1];
mse=sse/dfe;

rsq=1-sse/ypy;
finish;

/*-----subroutine to collect bprint results--------------------*/


/* on entry: L,mse,rsq, and c set up to collect */
/* on exit: bprint has another row */
/*-------------------------------------------------------------*/
start bpr;
bprint=bprint//(L||mse||rsq||(c[ik,k1]#in)`);
finish;

/*--------------stepwise methods---------------------*/
/* after a call to the initial routine, which sets up*/
/* the data, four different routines can be called */
/* to do four different model-selection methods. */
/*---------------------------------------------------*/
start seq;
run initial; /* initialization */
run all; /* all possible models */
run forward; /* forward selection method */
run backward; /* backward elimination method*/
run stepwise; /* stepwise method */
finish;

/*------------------------data on physical fitness--------------*


| These measurements were made on men involved in a physical |
| fitness course at N.C.State Univ. The variables are age(years)|
| weight(kg), oxygen uptake rate(ml per kg body weight per |
| minute), time to run 1.5 miles(minutes), heart rate while |
| resting, heart rate while running (same time oxygen rate |
| measured), and maximum heart rate recorded while running. |
| Certain values of maxpulse were modified for consistency. |
| Data courtesy DR. A.C. Linnerud |
*---------------------------------------------------------------*/
data =
{ 44 89.47 44.609 11.37 62 178 182 ,
40 75.07 45.313 10.07 62 185 185 ,
44 85.84 54.297 8.65 45 156 168 ,
42 68.15 59.571 8.17 40 166 172 ,
38 89.02 49.874 9.22 55 178 180 ,
47 77.45 44.811 11.63 58 176 176 ,
40 75.98 45.681 11.95 70 176 180 ,
43 81.19 49.091 10.85 64 162 170 ,
44 81.42 39.442 13.08 63 174 176 ,
38 81.87 60.055 8.63 48 170 186 ,
44 73.03 50.541 10.13 45 168 168 ,
45 87.66 37.388 14.03 56 186 192 ,
45 66.45 44.754 11.12 51 176 176 ,
47 79.15 47.273 10.60 47 162 164 ,
54 83.12 51.855 10.33 50 166 170 ,

49 81.42 49.156 8.95 44 180 185 ,


51 69.63 40.836 10.95 57 168 172 ,
51 77.91 46.672 10.00 48 162 168 ,
48 91.63 46.774 10.25 48 162 164 ,
49 73.37 50.388 10.08 67 168 168 ,
57 73.37 39.407 12.63 58 174 176 ,
54 79.38 46.080 11.17 62 156 165 ,
52 76.32 45.441 9.63 48 164 166 ,
50 70.87 54.625 8.92 48 146 155 ,
51 67.25 45.118 11.08 48 172 172 ,
54 91.63 39.203 12.88 44 168 172 ,
51 73.71 45.790 10.47 59 186 188 ,
57 59.08 50.545 9.93 49 148 155 ,
49 76.32 48.673 9.40 56 186 188 ,
48 61.24 47.920 11.50 52 170 176 ,
52 82.78 47.467 10.50 53 170 172 };

x=data[,{1 2 4 5 6 7 }];
y=data[,3];
free data;
varnames={age weight runtime rstpuls runpuls maxpuls};
reset fw=6 linesize=87;
run seq;

The results are shown in Output 9.6.1.

Output 9.6.1 Model Selection: Results

ALL POSSIBLE MODELS IN SEARCH ORDER

bb
NPARM MSE RSQUARE AGE WEIGHT RUNTIME RSTPULS RUNPULS MAXPULS

1 26.634 0.0928 -0.311 0 0 0 0 0


2 25.826 0.1506 -0.37 -0.158 0 0 0 0
1 28.58 0.0265 0 -0.104 0 0 0 0
2 7.7556 0.7449 0 -0.025 -3.289 0 0 0
3 7.2263 0.7708 -0.174 -0.054 -3.14 0 0 0
2 7.1684 0.7642 -0.15 0 -3.204 0 0 0
1 7.5338 0.7434 0 0 -3.311 0 0 0
2 7.7983 0.7435 0 0 -3.287 -0.01 0 0
3 7.3361 0.7673 -0.168 0 -3.079 -0.045 0 0
4 7.3666 0.775 -0.196 -0.059 -2.989 -0.053 0 0
3 8.0373 0.7451 0 -0.026 -3.263 -0.01 0 0
2 24.915 0.1806 0 -0.093 0 -0.275 0 0
3 20.28 0.3568 -0.447 -0.156 0 -0.322 0 0
2 21.276 0.3003 -0.389 0 0 -0.323 0 0
1 24.676 0.1595 0 0 0 -0.279 0 0

...<rows skipped>...

THE BEST MODEL FOR EACH NUMBER OF PARAMETERS


bprint
NPARM MSE RSQUARE AGE WEIGHT RUNTIME RSTPULS RUNPULS MAXPULS

1 7.5338 0.7434 0 0 -3.311 0 0 0


2 7.1684 0.7642 -0.15 0 -3.204 0 0 0
3 5.9567 0.8111 -0.256 0 -2.825 0 -0.131 0
4 5.3435 0.8368 -0.198 0 -2.768 0 -0.348 0.2705
5 5.1763 0.848 -0.22 -0.072 -2.683 0 -0.373 0.3049
6 5.3682 0.8487 -0.227 -0.074 -2.629 -0.022 -0.37 0.3032

FORWARD SELECTION METHOD

bprint
NPARM MSE RSQUARE AGE WEIGHT RUNTIME RSTPULS RUNPULS MAXPULS

1 7.5338 0.7434 0 0 -3.311 0 0 0


2 7.1684 0.7642 -0.15 0 -3.204 0 0 0
3 5.9567 0.8111 -0.256 0 -2.825 0 -0.131 0
4 5.3435 0.8368 -0.198 0 -2.768 0 -0.348 0.2705

BACKWARD ELIMINATION

bprint
NPARM MSE RSQUARE AGE WEIGHT RUNTIME RSTPULS RUNPULS MAXPULS

6 5.3682 0.8487 -0.227 -0.074 -2.629 -0.022 -0.37 0.3032


5 5.1763 0.848 -0.22 -0.072 -2.683 0 -0.373 0.3049
4 5.3435 0.8368 -0.198 0 -2.768 0 -0.348 0.2705

STEPWISE METHOD

bprint
NPARM MSE RSQUARE AGE WEIGHT RUNTIME RSTPULS RUNPULS MAXPULS

1 7.5338 0.7434 0 0 -3.311 0 0 0


2 7.1684 0.7642 -0.15 0 -3.204 0 0 0
3 5.9567 0.8111 -0.256 0 -2.825 0 -0.131 0
4 5.3435 0.8368 -0.198 0 -2.768 0 -0.348 0.2705

Example 9.7: Response Surface Methodology

A regression model with a complete quadratic set of regressions across several factors can be processed to
yield the estimated critical values that can optimize a response. First, the regression is performed for two
variables according to the model

$$ y = c + b_1 x_1 + b_2 x_2 + a_{11} x_1^2 + a_{12} x_1 x_2 + a_{22} x_2^2 + e $$


Example 9.7: Response Surface Methodology F 145

The estimates are then divided into a vector of linear coefficients $b$ and a matrix of quadratic coefficients $A$. Setting the gradient $b + 2Ax$ of the fitted surface to zero gives the critical values

$$ x = -\tfrac{1}{2} A^{-1} b $$
The following program creates a module to perform quadratic response surface regression.

/* Quadratic Response Surface Regression */


/* This matrix routine reads in the factor variables and */
/* the response, forms the quadratic regression model and */
/* estimates the parameters, and then solves for the optimal */
/* response, prints the optimal factors and response, and */
/* displays the eigenvalues and eigenvectors of the */
/* matrix of quadratic parameter estimates to determine if */
/* the solution is a maximum or minimum, or saddlepoint, and */
/* which direction has the steepest and gentlest slopes. */
/* */
/* Given that d contains the factor variables, */
/* and y contains the response. */
/* */

start rsm;
n=nrow(d);
k=ncol(d); /* dimensions */
x=j(n,1,1)||d; /* set up design matrix */
do i=1 to k;
do j=1 to i;
x=x||d[,i] #d[,j];
end;
end;
beta=solve(x`*x,x`*y); /* solve parameter estimates */
print "Parameter Estimates" , beta;
c=beta[1]; /* intercept estimate */
b=beta[2:(k+1)]; /* linear estimates */
a=j(k,k,0);
L=k+1; /* form quadratics into matrix */
do i=1 to k;
do j=1 to i;
L=L+1;
a[i,j]=beta [L,];
end;
end;
a=(a+a`)*.5; /* symmetrize */
xx=-.5*solve(a,b); /* solve for critical value */
print , "Critical Factor Values" , xx;
/* Compute response at critical value */
yopt=c + b`*xx + xx`*a*xx;
print , "Response at Critical Value" yopt;
call eigen(eval,evec,a);
print , "Eigenvalues and Eigenvectors", eval, evec;
if min(eval)>0 then print , "Solution Was a Minimum";
if max(eval)<0 then print , "Solution Was a Maximum";
finish rsm;

/* Sample Problem with Two Factors */


d={-1 -1, -1 0, -1 1,
0 -1, 0 0, 0 1,
1 -1, 1 0, 1 1};
y={ 71.7, 75.2, 76.3, 79.2, 81.5, 80.2, 80.1, 79.1, 75.8};
run rsm;

Running the module with the sample data produces the results shown in Output 9.7.1:

Output 9.7.1 Response Surface Regression: Results

Parameter Estimates

beta

81.222222
1.9666667
0.2166667
-3.933333
-2.225
-1.383333

Critical Factor Values

xx

0.2949376
-0.158881

yopt

Response at Critical Value 81.495032

Eigenvalues and Eigenvectors

eval

-0.96621
-4.350457

evec

-0.351076 0.9363469
0.9363469 0.3510761

Solution Was a Maximum



Example 9.8: Logistic and Probit Regression for Binary Response Models

A binary response Y is fit to a linear model according to

$$ \Pr(Y = 1) = F(X\beta), \qquad \Pr(Y = 0) = 1 - F(X\beta) $$

where $F$ is a smooth probability distribution function; the normal and logistic distribution functions are supported. The method is maximum likelihood via iteratively reweighted least squares, as described by Charnes, Frome, and Yu (1976); Jennrich and Moore (1975); and Nelder and Wedderburn (1972). The rows are scaled by the derivative of the distribution function (the density), and the weighting is $w/(p(1-p))$, where $w$ contains the counts or other weights. The following program calculates logistic and probit regression
for binary response models.
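In matrix form, each pass through the iteration loop in the BINEST module updates the estimates as

$$ b \leftarrow b + \left(\tilde{X}' W \tilde{X}\right)^{-1} \tilde{X}' W\,(y - p) $$

where $\tilde{X}$ is the regressor matrix with rows scaled by the density $f$ and $W = \mathrm{diag}\!\left(w/(p(1-p))\right)$. This is a restatement of the update that the following code implements, not a separate algorithm.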

/* routine for estimating binary response models */


/* y is the binary response, x are regressors, */
/* wgt are count weights, */
/* model is choice of logit probit, */
/* parm has the names of the parameters */

proc iml ;

start binest;
b=repeat(0,ncol(x),1);
oldb=b+1; /* starting values */
do iter=1 to 20 while(max(abs(b-oldb))>1e-8);
oldb=b;
z=x*b;
run f;
loglik=sum(((y=1)#log(p) + (y=0)#log(1-p))#wgt);
btransp=b`;
print iter loglik btransp;
w=wgt/(p#(1-p));
xx=f#x;
xpxi=inv(xx`*(w#xx));
b=b + xpxi*(xx`*(w#(y-p)));
end;
p0=sum((y=1)#wgt)/sum(wgt); /* average response */
loglik0=sum(((y=1)#log(p0) + (y=0)#log(1-p0))#wgt);
chisq=(2#(loglik-loglik0));
df=ncol(x)-1;
prob=1-probchi(chisq,df);

print ,
'Likelihood Ratio, Intercept-only Model' chisq df prob,;

stderr=sqrt(vecdiag(xpxi));
tratio=b/stderr;
print parm b stderr tratio,,;
finish;

/*---routine to yield distribution function and density---*/


start f;
if model='LOGIT' then
do;
p=1/(1+exp(-z));
f=p#p#exp(-z);
end;
if model='PROBIT' then
do;
p=probnorm(z);
f=exp(-z#z/2)/sqrt(8*atan(1));
end;
finish;

/* Ingot data from COX (1970, pp. 67-68)*/


data={ 7 1.0 0 10, 14 1.0 0 31, 27 1.0 1 56, 51 1.0 3 13,
7 1.7 0 17, 14 1.7 0 43, 27 1.7 4 44, 51 1.7 0 1,
7 2.2 0 7, 14 2.2 2 33, 27 2.2 0 21, 51 2.2 0 1,
7 2.8 0 12, 14 2.8 0 31, 27 2.8 1 22,
7 4.0 0 9, 14 4.0 0 19, 27 4.0 1 16, 51 4.0 0 1};
nready=data[,3];
ntotal=data[,4];
n=nrow(data);
x=repeat(1,n,1)||(data[,{1 2}]); /* intercept, heat, soak */
x=x//x; /* regressors */
y=repeat(1,n,1)//repeat(0,n,1); /* binary response */
wgt=nready//(ntotal-nready); /* row weights */
parm={intercept, heat, soak}; /* names of regressors */

model={logit};
run binest; /* run logit model */

model={probit};
run binest; /* run probit model */

The results are shown in Output 9.8.1.

Output 9.8.1 Logistic and Probit Regression: Results

iter loglik btransp

1 -268.248 0 0 0

iter loglik btransp

2 -76.29481 -2.159406 0.0138784 0.0037327

iter loglik btransp

3 -53.38033 -3.53344 0.0363154 0.0119734


iter loglik btransp

4 -48.34609 -4.748899 0.0640013 0.0299201

iter loglik btransp

5 -47.69191 -5.413817 0.0790272 0.04982

iter loglik btransp

6 -47.67283 -5.553931 0.0819276 0.0564395

iter loglik btransp

7 -47.67281 -5.55916 0.0820307 0.0567708

iter loglik btransp

8 -47.67281 -5.559166 0.0820308 0.0567713

chisq df prob

Likelihood Ratio, Intercept-only Model 11.64282 2 0.0029634

parm b stderr tratio

INTERCEPT -5.559166 1.1196947 -4.964895


HEAT 0.0820308 0.0237345 3.4561866
SOAK 0.0567713 0.3312131 0.1714042

iter loglik btransp

1 -268.248 0 0 0

iter loglik btransp

2 -71.71043 -1.353207 0.008697 0.0023391

iter loglik btransp

3 -51.64122 -2.053504 0.0202739 0.0073888

iter loglik btransp

4 -47.88947 -2.581302 0.032626 0.018503

iter loglik btransp

5 -47.48924 -2.838938 0.0387625 0.0309099

iter loglik btransp

6 -47.47997 -2.890129 0.0398894 0.0356507


iter loglik btransp

7 -47.47995 -2.89327 0.0399529 0.0362166

iter loglik btransp

8 -47.47995 -2.893408 0.0399553 0.0362518

iter loglik btransp

9 -47.47995 -2.893415 0.0399554 0.0362537

iter loglik btransp

10 -47.47995 -2.893415 0.0399555 0.0362538

iter loglik btransp

11 -47.47995 -2.893415 0.0399555 0.0362538

chisq df prob

Likelihood Ratio, Intercept-only Model 12.028543 2 0.0024436

parm b stderr tratio

INTERCEPT -2.893415 0.5006009 -5.779884


HEAT 0.0399555 0.0118466 3.3727357
SOAK 0.0362538 0.1467431 0.2470561


Example 9.9: Linear Programming

The two-phase method for linear programming can be used to solve the problem

$$ \max\; c'x \quad \text{subject to} \quad Ax \;\{\le,\, =,\, \ge\}\; b, \quad x \ge 0 $$

A SAS/IML routine that solves this problem follows. The approach appends slack, surplus, and artificial
variables to the model where needed. It then solves phase 1 to find a primal feasible solution. If a primal
feasible solution exists and is found, the routine then goes on to phase 2 to find an optimal solution, if one
exists. The routine is general enough to handle minimizations as well as maximizations.
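In phase 1 the routine minimizes the sum of the artificial variables (a sketch; the code tracks the artificial variables through the artobj vector):

$$ \min\; \textstyle\sum_i a_i \quad \text{subject to} \quad Ax + s - r + a = b, \quad x,\, s,\, r,\, a \ge 0 $$

where $s$ and $r$ are the slack and surplus variables. The original problem is primal feasible if and only if this optimum is zero; phase 2 then optimizes $c'x$ starting from the feasible basis that phase 1 produces.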

/* Subroutine to solve Linear Programs */


/* names: names of the decision variables */
/* obj: coefficients of the objective function */
/* maxormin: the value 'MAX' or 'MIN', upper or lowercase */
/* coef: coefficients of the constraints */
/* rel: character array of values: '<=' or '>=' or '=' */
/* rhs: right-hand side of constraints */
/* activity: returns the optimal value of decision variables*/
/* */
proc iml;
start linprog( names, obj, maxormin, coef, rel, rhs, activity);

bound=1.0e10;
m=nrow(coef);
n=ncol(coef);

/* Convert to maximization */
if upcase(maxormin)='MIN' then o=-1;
else o=1;

/* Build logical variables */


rev=(rhs<0);
adj=(-1*rev)+^ rev;
ge =(( rel = '>=' ) & ^rev) | (( rel = '<=' ) & rev);
eq=(rel='=');
if max(ge)=1 then
do;
sr=I(m);
logicals=-sr[,loc(ge)]||I(m);
artobj=repeat(0,1,ncol(logicals)-m)||(eq+ge)`;
end;
else do;
logicals=I(m);
artobj=eq`;
end;
nl=ncol(logicals);
nv=n+nl+2;

/* Build coef matrix */


a=((o*obj)||repeat(0,1,nl)||{ -1 0 })//
(repeat(0,1,n)||-artobj||{ 0 -1 })//
((adj#coef)||logicals||repeat(0,m,2));

/* rhs, lower bounds, and basis */


b={0,0}//(adj#rhs);
L=repeat(0,1,nv-2)||-bound||-bound;
basis=nv-(0:nv-1);

/* Phase 1 - primal feasibility */


call lp(rc,x,y,a,b,nv,,l,basis);
print ( { ' ',
'**********Primal infeasible problem************',
' ',
'*********Numerically unstable problem**********',
'*********Singular basis encountered************',
'*******Solution is numerically unstable********',
'***Subroutine could not obtain enough memory***',
'**********Number of iterations exceeded********'
}[rc+1]);
if x[nv] ^=0 then
do;
print '**********Primal infeasible problem************';
stop;
end;
if rc>0 then stop;

/* phase 2 - dual feasibility */


u=repeat(.,1,nv-2)||{ . 0 };
L=repeat(0,1,nv-2)||-bound||0;
call lp(rc,x,y,a,b,nv-1,u,l,basis);

/* Report the solution */


print ( { '*************Solution is optimal***************',
'*********Numerically unstable problem**********',
'**************Unbounded problem****************',
'*******Solution is numerically unstable********',
'*********Singular basis encountered************',
'*******Solution is numerically unstable********',
'***Subroutine could not obtain enough memory***',
'**********Number of iterations exceeded********'
}[rc+1]);
value=o*x [nv-1];
print ,'Objective Value ' value;
activity= x [1:n] ;
print ,'Decision Variables ' activity[r=names];
lhs=coef*x[1:n];
dual=y[3:m+2];
print ,'Constraints ' lhs rel rhs dual,
'***********************************************';

finish;

Consider the following product mix example (Hadley 1962). A shop with three machines, A, B, and C, turns
out products 1, 2, 3, and 4. Each product must be processed on each of the three machines (for example,
lathes, drills, and milling machines). The following table shows the number of hours required by each
product on each machine:

Product
Machine 1 2 3 4
A 1.5 1 2.4 1
B 1 5 1 3.5
C 1.5 3 3.5 1

The weekly time available on each of the machines is 2000, 8000, and 5000 hours, respectively. The
products contribute 5.24, 7.30, 8.34, and 4.18 to profit, respectively. What mixture of products can be
manufactured that maximizes profit?
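Writing $x_j$ for the weekly production of product $j$, the linear program passed to the LINPROG module is

$$ \max\; 5.24x_1 + 7.30x_2 + 8.34x_3 + 4.18x_4 $$

subject to the machine-hour constraints

$$ 1.5x_1 + x_2 + 2.4x_3 + x_4 \le 2000, \quad x_1 + 5x_2 + x_3 + 3.5x_4 \le 8000, \quad 1.5x_1 + 3x_2 + 3.5x_3 + x_4 \le 5000 $$

with $x_j \ge 0$. You can solve the problem as follows: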

names={'product 1' 'product 2' 'product 3' 'product 4'};


profit={ 5.24 7.30 8.34 4.18};
tech={ 1.5 1 2.4 1 ,
1 5 1 3.5 ,
1.5 3 3.5 1 };
time={ 2000, 8000, 5000};
rel={ '<=', '<=', '<=' };
run linprog(names,profit,'max',tech,rel,time,products);

The results from this example are shown in Output 9.9.1.

Output 9.9.1 Product Mix: Optimal Solution

*************Solution is optimal***************

value

Objective Value 12737.059

activity

Decision Variables product 1 294.11765


product 2 1500
product 3 0
product 4 58.823529

lhs rel rhs dual

Constraints 2000 <= 2000 1.9535294


8000 <= 8000 0.2423529
5000 <= 5000 1.3782353

***********************************************

The following example shows how to find the minimum cost flow through a network by using linear programming. The arcs are defined by an array of tuples; each tuple names a new arc. The elements in the arc
tuples give the names of the tail and head nodes that define the arc. The following data are needed: arcs,
cost for a unit of flow across the arcs, nodes, and supply and demand at each node.
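Each column of the node-arc incidence matrix contains a $+1$ in the row of the arc's tail node and a $-1$ in the row of its head node, so the equality constraints passed to the LINPROG module are the flow-conservation conditions (a sketch of what the program builds in the matrix n_a):

$$ Nx = s, \qquad x \ge 0 $$

where $x$ is the vector of arc flows and $s$ is the supply-demand vector (negative entries denote demand).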

The following program generates the node-arc incidence matrix and calls the linear program routine for
solution:

arcs={ 'ab' 'bd' 'ad' 'bc' 'ce' 'de' 'ae' };


cost={ 1 2 4 3 3 2 9 };
nodes={ 'a', 'b', 'c', 'd', 'e'};
supdem={ 2, 0, 0, -1, -1 };
rel=repeat('=',nrow(nodes),1);
inode=substr(arcs,1,1);
onode=substr(arcs,2,1);
free n_a_i n_a_o;
do i=1 to ncol(arcs);
n_a_i=n_a_i || (inode[i]=nodes);
n_a_o=n_a_o || (onode[i]=nodes);
end;
n_a=n_a_i - n_a_o;
run linprog(arcs,cost,'min',n_a,rel,supdem,x);

The solution is shown in Output 9.9.2.

Output 9.9.2 Minimum Cost Flow: Optimal Solution

*************Solution is optimal***************

value

Objective Value 8

activity

Decision Variables ab 2
bd 2
ad 0
bc 0
ce 0
de 1
ae 0

lhs rel rhs dual

Constraints 2 = 2 -2.5
0 = 0 -1.5
0 = 0 -0.5
-1 = -1 -0.5
-1 = -1 -2.5

***********************************************

Example 9.10: Quadratic Programming

The quadratic program

$$ \min\; c'x + x'Hx/2 \quad \text{subject to} \quad Gx \;\{\le,\, =,\, \ge\}\; b, \quad x \ge 0 $$

can be solved by solving an equivalent linear complementarity problem when H is positive semidefinite.
The approach is outlined in the discussion of the LCP subroutine.
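Specifically, after the constraints are put into canonical form, the routine builds the complementarity problem of finding $w, z \ge 0$ such that $w = Mz + q$ and $w'z = 0$, where

$$ M = \begin{pmatrix} H & -G' \\ G & 0 \end{pmatrix}, \qquad q = \begin{pmatrix} c \\ -b \end{pmatrix} $$

and the leading components of $z$ contain the optimal $x$. This restates what the code constructs in the matrices m and q; it is not a separate derivation.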
The following routine solves the quadratic problem.

/* Routine to solve quadratic programs */


/* names: the names of the decision variables */
/* c: vector of linear coefficients of the objective function */
/* H: matrix of quadratic terms in the objective function */
/* G: matrix of constraint coefficients */
/* rel: character array of values: '<=' or '>=' or '=' */
/* b: right-hand side of constraints */
/* activity: returns the optimal value of decision variables */

start qp( names, c, H, G, rel, b, activity);


if min(eigval(h))<0 then
do;
print
'ERROR: The minimum eigenvalue of the H matrix is negative. ';
print ' Thus it is not positive semidefinite. ';
print ' QP is terminating with this error. ';
stop;
end;
nr=nrow(G);
nc=ncol(G);

/* Put in canonical form */


rev=(rel='<=');
adj=(-1 * rev) + ^rev;
g=adj# G; b = adj # b;
eq=( rel = '=' );
if max(eq)=1 then
do;
g=g // -(diag(eq)*G)[loc(eq),];
b=b // -(diag(eq)*b)[loc(eq)];
end;
m=(h || -g`) //(g || j(nrow(g),nrow(g),0));
q=c // -b;

/* Solve the problem */


call lcp(rc,w,z,M,q);

/* Report the solution */


reset noname;
print ( { '*************Solution is optimal***************',
'*********No solution possible******************',
' ',
' ',
' ',
'**********Solution is numerically unstable*****',
'***********Not enough memory*******************',
'**********Number of iterations exceeded********'}[rc+1]);
reset name;
activity=z[1:nc];
objval=c`*activity + activity`*H*activity/2;
print ,'Objective Value ' objval,
'Decision Variables ' activity[r=names],
'***********************************************';

finish qp;

As an example, consider the following problem in portfolio selection. Models used in selecting investment
portfolios include assessment of the proposed portfolio's expected gain and its associated risk. One such
model seeks to minimize the variance of the portfolio subject to a minimum expected gain. This can be
modeled as a quadratic program in which the decision variables are the proportions to invest in each of the
possible securities. The quadratic component of the objective function is the covariance of gain between the
securities; the first constraint is a proportionality constraint; and the second constraint gives the minimum
acceptable expected gain.
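In the notation of the QP module, the model encoded by the data that follow is

$$ \min_x\; \tfrac{1}{2}\, x'Hx \quad \text{subject to} \quad \textstyle\sum_j x_j = 1, \quad \mu'x \ge 0.10, \quad x \ge 0 $$

where $H$ is the covariance matrix of gains between the securities, $\mu$ is the vector of expected gains, and the linear term $c$ is zero.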
The following data are used to illustrate the model and its solution:

c = { 0, 0, 0, 0 };
h = { 1003.1 4.3 6.3 5.9 ,
4.3 2.2 2.1 3.9 ,
6.3 2.1 3.5 4.8 ,
5.9 3.9 4.8 10 };
g = { 1 1 1 1 ,
.17 .11 .10 .18 };
b = { 1 , .10 };
rel = { '=', '>='};
names = {'ibm', 'dec', 'dg', 'prime' };
run qp(names,c,h,g,rel,b,activity);

The results in Output 9.10.1 show that the minimum variance portfolio achieving the 0.10 expected gain is
composed of DEC and DG stock in proportions of 0.933 and 0.067.

Output 9.10.1 Portfolio Selection: Optimal Solution

*************Solution is optimal***************

objval

Objective Value 1.0966667


activity

Decision Variables ibm 0


dec 0.9333333
dg 0.0666667
prime 0

***********************************************

Example 9.11: Regression Quantiles

The technique of estimating parameters in linear models by using the notion of regression quantiles is a
generalization of the LAE or LAV least absolute value estimation technique. For a given quantile $q$, the
estimate $b$ of $\beta$ in the model

$$ Y = X\beta + \epsilon $$

is the value of $b$ that minimizes

$$ \sum_{t \in T} q\,|y_t - x_t b| + \sum_{t \in S} (1-q)\,|y_t - x_t b| $$

where $T = \{t \mid y_t \ge x_t b\}$ and $S = \{t \mid y_t < x_t b\}$. For $q = 0.5$, the solution $b$ is identical to the estimates
produced by the LAE. The following routine finds this estimate by using linear programming.

/* Routine to find regression quantiles */


/* yname: name of dependent variable */
/* y: dependent variable */
/* xname: names of independent variables */
/* X: independent variables */
/* b: estimates */
/* predict: predicted values */
/* error: difference of y and predicted. */
/* q: quantile */
/* */
/* notes: This subroutine finds the estimates b */
/* that minimize */
/* */
/* q * (y - Xb) * e + (1-q) * (y - Xb) * ^e */
/* */
/* where e = ( Xb <= y ). */
/* */
/* This subroutine follows the approach given in: */
/* */
/* Koenker, R. and G. Bassett (1978). Regression */
/* quantiles. Econometrica. Vol. 46. No. 1. 33-50. */
/* */
/* Bassett, G. and R. Koenker (1982). An empirical        */
/* quantile function for linear models with iid errors.   */
/* JASA. Vol. 77. No. 378. 407-415. */
/* */
/* When q = .5 this is equivalent to minimizing the sum */
/* of the absolute deviations, which is also known as */
/* L1 regression. Note that for L1 regression, a faster */
/* and more accurate algorithm is available in the SAS/IML */
/* routine LAV, which is based on the approach given in: */
/* */
/* Madsen, K. and Nielsen, H. B. (1993). A finite */
/* smoothing algorithm for linear L1 estimation. */
/* SIAM J. Optimization, Vol. 3. 223-235. */
/*---------------------------------------------------------*/
start rq( yname, y, xname, X, b, predict, error, q);
bound=1.0e10;
coef = X`;
m = nrow(coef);
n = ncol(coef);

/*-----------------build rhs and bounds--------------------*/


e = repeat(1,1,n)`;
r = {0 0} || ((1-q)*coef*e)`;
sign = repeat(1,1,m);

do i=1 to m;
if r[2+i] < 0 then do;
sign[i] = -1;
r[2+i] = -r[2+i];
coef[i,] = -coef[i,];
end;
end;

l = repeat(0,1,n) || repeat(0,1,m) || -bound || -bound ;


u = repeat(1,1,n) || repeat(.,1,m) || { . . } ;

/*-------------build coefficient matrix and basis----------*/


a = ( y` || repeat(0,1,m) || { -1 0 } ) //
( repeat(0,1,n) || repeat(-1,1,m) || { 0 -1 } ) //
( coef || I(m) || repeat(0,m,2)) ;
basis = n+m+2 - (0:n+m+1);

/*----------------find a feasible solution-----------------*/


call lp(rc,p,d,a,r,,u,l,basis);

/*----------------find the optimal solution----------------*/


l = repeat(0,1,n) || repeat(0,1,m) || -bound || {0} ;
u = repeat(1,1,n) || repeat(0,1,m) || { . 0 } ;
call lp(rc,p,d,a,r,n+m+1,u,l,basis);

/*---------------- report the solution-----------------------*/


variable = xname`; b=d[3:m+2];
do i=1 to m;
b[i] = b[i] * sign[i];

end;
predict = X*b;
error = y - predict;
wsum = sum ( choose(error<0 , (q-1)*error , q*error) );

print ,,'Regression Quantile Estimation' ,


'Dependent Variable: ' yname ,
'Regression Quantile: ' q ,
'Number of Observations: ' n ,
'Sum of Weighted Absolute Errors: ' wsum ,
variable b,
X y predict error;
finish rq;

The following example uses data on the United States population from 1790 to 1970:

z = { 3.929 1790 ,
5.308 1800 ,
7.239 1810 ,
9.638 1820 ,
12.866 1830 ,
17.069 1840 ,
23.191 1850 ,
31.443 1860 ,
39.818 1870 ,
50.155 1880 ,
62.947 1890 ,
75.994 1900 ,
91.972 1910 ,
105.710 1920 ,
122.775 1930 ,
131.669 1940 ,
151.325 1950 ,
179.323 1960 ,
203.211 1970 };

y=z[,1];
x=repeat(1,19,1)||z[,2]||z[,2]##2;
run rq('pop',y,{'intercpt' 'year' 'yearsq'},x,b1,pred,resid,.5);

The results are shown in Output 9.11.1.

Output 9.11.1 Regression Quantiles: Results

Regression Quantile Estimation

yname

Dependent Variable: pop

Regression Quantile: 0.5


Number of Observations: 19

wsum

Sum of Weighted Absolute Errors: 14.826429

variable b

intercpt 21132.758
year -23.52574
yearsq 0.006549

X y predict error

1 1790 3204100 3.929 5.4549176 -1.525918


1 1800 3240000 5.308 5.308 -4.54E-12
1 1810 3276100 7.239 6.4708902 0.7681098
1 1820 3312400 9.638 8.9435882 0.6944118
1 1830 3348900 12.866 12.726094 0.1399059
1 1840 3385600 17.069 17.818408 -0.749408
1 1850 3422500 23.191 24.220529 -1.029529
1 1860 3459600 31.443 31.932459 -0.489459
1 1870 3496900 39.818 40.954196 -1.136196
1 1880 3534400 50.155 51.285741 -1.130741
1 1890 3572100 62.947 62.927094 0.0199059
1 1900 3610000 75.994 75.878255 0.1157451
1 1910 3648100 91.972 90.139224 1.8327765
1 1920 3686400 105.71 105.71 8.669E-13
1 1930 3724900 122.775 122.59058 0.1844157
1 1940 3763600 131.669 140.78098 -9.111976
1 1950 3802500 151.325 160.28118 -8.956176
1 1960 3841600 179.323 181.09118 -1.768184
1 1970 3880900 203.211 203.211 -2.96E-12

The $L_1$ norm (when $q = 0.5$) tends to cause the fit to be better at more points at the expense of causing some points to fit worse, as shown by the following plot, which compares the $L_1$ residuals with the least squares residuals.

Output 9.11.2 L1 Residuals vs. Least Squares Residuals

When $q = 0.5$, the results of this module can be compared with the results of the LAV routine, as follows:

b0 = {1 1 1}; /* initial value */


optn = j(4,1,.); /* options vector */

optn[1]= .; /* gamma default */


optn[2]= 5; /* print all */
optn[3]= 0; /* McKean-Schradar variance */
optn[4]= 1; /* convergence test */

call LAV(rc, xr, x, y, b0, optn);

Example 9.12: Simulations of a Univariate ARMA Process

Simulations of time series with known ARMA structure are often needed as part of other simulations or
as learning data sets for developing time series analysis skills. The following program generates a time
series by using the IML functions NORMAL, ARMACOV, HANKEL, PRODUCT, RATIO, TOEPLITZ,
and ROOT.

proc iml;
reset noname;
start armasim(y,n,phi,theta,seed);
/*-----------------------------------------------------------*/
/* IML Module: armasim */
/* Purpose: Simulate n data points from ARMA process */
/* exact covariance method */
/* Arguments: */
/* */
/* Input: n : series length */
/* phi : AR coefficients */
/* theta: MA coefficients */
/* seed : integer seed for normal deviate generator */
/* Output: y: realization of ARMA process */
/* ----------------------------------------------------------*/

p=ncol(phi)-1;
q=ncol(theta)-1;
y=normal(j(1,n+q,seed));

/* Pure MA or white noise */


if p=0 then y=product(theta,y)[,(q+1):(n+q)];
else do; /* Pure AR or ARMA */

/* Get the autocovariance function */


call armacov(gamma,cov,ma,phi,theta,p);
if gamma[1]<0 then
do;
print 'ARMA parameters not stable.';
print 'Execution terminating.';
stop;
end;

/* Form covariance matrix */


gamma=toeplitz(gamma);

/* Generate covariance between initial y and */


/* initial innovations */
if q>0 then
do;
psi=ratio(phi,theta,q);
psi=hankel(psi[,-((-q):(-1))]);
m=max(1,(q-p+1));
psi=psi[-((-q):(-m)),];
if p>q then psi=j(p-q,q,0)//psi;
gamma=(gamma||psi)//(psi`||i(q));
end;

/* Use Cholesky root to get startup values */


gamma=root(gamma);
startup=y[,1:(p+q)]*gamma;

e=y[,(p+q+1):(n+q)];

/* Generate MA part */
if q>0 then
do;
e=startup[,(p+1):(p+q)]||e;
e=product(theta,e)[,(q+1):(n+q-p)];
end;

y=startup[,1:p];
phi1=phi[,-(-(p+1):(-2))]`;

/* Use difference equation to generate */


/* remaining values */
do ii=1 to n-p;
y=y||(e[,ii]-y[,ii:(ii+p-1)]*phi1);
end;
end;
y=y`;
finish armasim; /* ARMASIM */

run armasim(y,10,{1 -0.8},{1 0.5},1234321);


print ,'Simulated Series:', y;

The results are shown in Output 9.12.1.

Output 9.12.1 Simulated Series

Simulated Series:

3.0764594
1.8931735
0.9527984
0.0892395
-1.811471
-2.8063
-2.52739
-2.865251
-1.332334
0.1049046

Example 9.13: Parameter Estimation for a Regression Model with ARMA Errors

Nonlinear estimation algorithms are required for obtaining estimates of the parameters of a regression model
with innovations having an ARMA structure. The three estimation methods employed by the ARIMA
procedure in SAS/ETS software are written in IML in the following program. The algorithms employed
are slightly different from those used by PROC ARIMA, but the results obtained should be similar. This
example combines the IML functions ARMALIK, PRODUCT, and RATIO to perform the estimation. Note
the interactive nature of this example, illustrating how you can adjust the estimates when they venture
outside the stationary or invertible regions.

/*-------------------------------------------------------------*/
/*---- Grunfeld's Investment Models Fit with ARMA Errors ----*/
/*-------------------------------------------------------------*/
data grunfeld;
input year gei gef gec wi wf wc;
label gei='gross investment ge'
gec='capital stock lagged ge'
gef='value of outstanding shares ge lagged'
wi ='gross investment w'
wc ='capital stock lagged w'
wf ='value of outstanding shares lagged w';
/*--- GE STANDS FOR GENERAL ELECTRIC AND W FOR WESTINGHOUSE ---*/
datalines;
1935 33.1 1170.6 97.8 12.93 191.5 1.8
1936 45.0 2015.8 104.4 25.90 516.0 .8
1937 77.2 2803.3 118.0 35.05 729.0 7.4
1938 44.6 2039.7 156.2 22.89 560.4 18.1
1939 48.1 2256.2 172.6 18.84 519.9 23.5
1940 74.4 2132.2 186.6 28.57 628.5 26.5
1941 113.0 1834.1 220.9 48.51 537.1 36.2
1942 91.9 1588.0 287.8 43.34 561.2 60.8
1943 61.3 1749.4 319.9 37.02 617.2 84.4
1944 56.8 1687.2 321.3 37.81 626.7 91.2
1945 93.6 2007.7 319.6 39.27 737.2 92.4
1946 159.9 2208.3 346.0 53.46 760.5 86.0
1947 147.2 1656.7 456.4 55.56 581.4 111.1
1948 146.3 1604.4 543.4 49.56 662.3 130.6
1949 98.3 1431.8 618.3 32.04 583.8 141.8
1950 93.5 1610.5 647.4 32.24 635.2 136.7
1951 135.2 1819.4 671.3 54.38 723.8 129.7
1952 157.3 2079.7 726.1 71.78 864.1 145.5
1953 179.5 2371.6 800.3 90.08 1193.5 174.8
1954 189.6 2759.9 888.9 68.60 1188.9 213.5
;
run;

proc iml;
reset noname;
/*-----------------------------------------------------------*/
/* name: ARMAREG Modules */
/* purpose: Perform Estimation for regression model with */
/* ARMA errors */
/* usage: Before invoking the command */
/* */
/* run armareg; */
/* */
/* define the global parameters */
/* */
/* x - matrix of predictors. */
/* y - response vector. */
/* iphi - defines indices of nonzero AR parameters, */
/* omit the index 0 which corresponds to the zero */


/* order constant one. */
/* itheta - defines indices of nonzero MA parameters, */
/* omit the index 0 which corresponds to the zero */
/* order constant one. */
/* ml - estimation option: -1 if Conditional Least */
/* Squares, 1 if Maximum Likelihood, otherwise */
/* Unconditional Least Squares. */
/* delta - step change in parameters (default 0.005). */
/* par - initial values of parms. First ncol(iphi) */
/* values correspond to AR parms, next ncol(itheta)*/
/* values correspond to MA parms, and remaining */
/* are regression coefficients. */
/* init - undefined or zero for first call to ARMAREG. */
/* maxiter - maximum number of iterations. No other */
/* convergence criterion is used. You can invoke */
/* ARMAREG without changing parameter values to */
/* continue iterations. */
/* nopr - undefined or zero implies no printing of */
/* intermediate results. */
/* */
/* notes: Optimization using Gauss-Newton iterations */
/* */
/* No checking for invertibility or stationarity during */
/* estimation process. The parameter array par can be */
/* modified after running armareg to place estimates */
/* in the stationary and invertible regions, and then */
/* armareg can be run again. If a nonstationary AR operator */
/* is employed, a PAUSE will occur after calling ARMALIK */
/* because of a detected singularity. Using STOP will */
/* permit termination of ARMAREG so that the AR */
/* coefficients can be modified. */
/* */
/* T-ratios are only approximate and can be undependable, */
/* especially for small series. */
/* */
/* The notation follows that of the IML function ARMALIK; */
/* the autoregressive and moving average coefficients have */
/* signs opposite those given by PROC ARIMA. */

/* Begin ARMA estimation modules */

/* Generate residuals */
start gres;
noise=y-x*beta;
previous=noise[:];
if ml=-1 then do; /* Conditional LS */
noise=j(nrow(y),1,previous)//noise;
resid=product(phi,noise`)[,nrow(y)+1:nrow(noise)];
resid=ratio(theta,resid,ncol(resid));
resid=resid[,1:ncol(resid)]`;
end;
else do; /* Maximum likelihood */
free l;

call armalik(l,resid,std,noise,phi,theta);

/* Nonstationary condition produces PAUSE */


if nrow(l)=0 then do;
print ,
'In GRES: Parameter estimates outside stationary region.';
end;
else do;
temp=l[3,]/(2#nrow(resid));
if ml=1 then resid=resid#exp(temp);
end;
end;
finish gres; /* finish module GRES */

start getpar; /* get parameters */


if np=0 then phi=1;
else do;
temp=parm[,1:np];
phi=1||j(1,p,0);
phi[,iphi] =temp;
end;
if nq=0 then theta=1;
else do;
temp=parm[,np+1:np+nq];
theta=1||j(1,q,0);
theta[,itheta] =temp;
end;
beta=parm[,(np+nq+1):ncol(parm)]`;
finish getpar; /* finish module GETPAR */

/* Get SS Matrix - First Derivatives */


start getss;
parm=par;
run getpar;
run gres;
s=resid;
oldsse=ssq(resid);
do k=1 to ncol(par);
parm=par;
parm[,k]=parm[,k]+delta;
run getpar;
run gres;
s=s||((resid-s[,1])/delta); /* append derivatives */
end;
ss=s`*s;
if nopr^=0 then print ,'Gradient Matrix', ss;
sssave=ss;
do k=1 to 20; /* Iterate if no reduction in SSE */
do ii=2 to ncol(ss);
ss[ii,ii]=(1+lambda)*ss[ii,ii];
end;
ss=sweep(ss,2:ncol(ss)); /* Gaussian elimination */
delpar=ss[1,2:ncol(ss)]; /* update parm increments */

parm=par+delpar;
run getpar;
run gres;
sse=ssq(resid);
ss=sssave;
if sse<oldsse then do; /* reduction, no iteration */
lambda=max(lambda/10,1e-12);
k=21;
end;
else do; /* no reduction */
/* increase lambda and iterate */
if nopr^=0 then print ,
'Lambda=' lambda 'SSE=' sse 'OLDSSE=' oldsse,
'Gradient Matrix', ss ;
lambda=min(10*lambda,1e12);
if k=20 then do;
print 'In module GETSS:'
'No improvement in SSE after twenty iterations.';
print ' Possible Ridge Problem. ';
return;
end;
end;
end;
if nopr^=0 then print ,'Gradient Matrix', ss;
finish getss; /* Finish module GETSS */

start armareg; /* ARMAREG main module */


/* Initialize options and parameters */
if nrow(delta)=0 then delta=0.005;
if nrow(maxiter)=0 then maxiter=5;
if nrow(nopr)=0 then nopr=0;
if nrow(ml)=0 then ml=1;
if nrow(init)=0 then init=0;
if init=0 then do;
p=max(iphi);
q=max(itheta);
np=ncol(iphi);
nq=ncol(itheta);

/* Make indices one-based */


do k=1 to np;
iphi[,k]=iphi[,k]+1;
end;
do k=1 to nq;
itheta[,k]=itheta[,k]+1;
end;

/* Create row labels for Parameter estimates */


if p>0 then parmname = concat("AR",char(1:p,2));
if q>0 then parmname = parmname||concat("MA",char(1:q,2));
parmname = parmname||concat("B",char(1:ncol(x),2));

/* Create column labels for Parameter estimates */


pname = {"Estimate" "Std. Error" "T-Ratio"};

init=1;
end;

/* Generate starting values */


if nrow(par)=0 then do;
beta=inv(x`*x)*x`*y;
if np+nq>0 then par=j(1,np+nq,0)||beta`;
else par=beta`;
end;
print ,'Parameter Starting Values',;
print par [colname=parmname]; /* stderr tratio */
lambda=1e-6; /* Controls step size */
do iter=1 to maxiter; /* Do maxiter iterations */
run getss;
par=par+delpar;
if nopr^=0 then do;
print ,'Parameter Update',;
print par [colname=parmname]; /* stderr tratio */
print ,'Lambda=' lambda,;
end;
end;

sighat=sqrt(sse/(nrow(y)-ncol(par)));
print ,'Innovation Standard Deviation:' sighat;
ss=sweep(ss,2:ncol(ss)); /* Gaussian elimination */
estm=par`||(sqrt(diag(ss[2:ncol(ss),2:ncol(ss)]))
*j(ncol(par),1,sighat));
estm=estm||(estm[,1] /estm[,2]);
if ml=1 then print ,'Maximum Likelihood Estimation Results',;
else if ml=-1 then print ,
'Conditional Least Squares Estimation Results',;
else print ,'Unconditional Least Squares Estimation Results',;
print estm [rowname=parmname colname=pname] ;
finish armareg;
/* End of ARMA Estimation modules */

/* Begin estimation for Grunfeld's investment models */


use grunfeld;
read all var {gei} into y;
read all var {gef gec} into x;
close grunfeld;

x=j(nrow(x),1,1)||x;
iphi=1;
itheta=1;
maxiter=10;
delta=0.0005;
ml=-1;
/*---- To prevent overflow, specify starting values ----*/
par={-0.5 0.5 -9.956306 0.0265512 0.1516939};
run armareg; /*---- Perform CLS estimation ----*/

The results are shown in Output 9.13.1.



Output 9.13.1 Conditional Least Squares Results

Parameter Starting Values

AR 1 MA 1 B 1 B 2 B 3

-0.5 0.5 -9.956306 0.0265512 0.1516939

In module GETSS: No improvement in SSE after twenty iterations.

Possible Ridge Problem.

In module GETSS: No improvement in SSE after twenty iterations.

Possible Ridge Problem.

In module GETSS: No improvement in SSE after twenty iterations.

Possible Ridge Problem.

Innovation Standard Deviation: 22.653769

Conditional Least Squares Estimation Results

Estimate Std. Error T-Ratio

AR 1 -0.230905 0.3429525 -0.673287


MA 1 0.69639 0.2480617 2.8073252
B 1 -20.87774 31.241368 -0.668272
B 2 0.038706 0.0167503 2.3107588
B 3 0.1216554 0.0441722 2.7541159

/*---- With CLS estimates as starting values, ----*/


/*---- perform ML estimation. ----*/
ml=1;
maxiter=10;
run armareg;

The results are shown in Output 9.13.2.

Output 9.13.2 Maximum Likelihood Results

Parameter Starting Values


AR 1 MA 1 B 1 B 2 B 3

-0.230905 0.69639 -20.87774 0.038706 0.1216554

Innovation Standard Deviation: 23.039253

Maximum Likelihood Estimation Results

Estimate Std. Error T-Ratio

AR 1 -0.196224 0.3510868 -0.558904


MA 1 0.6816033 0.2712043 2.5132468
B 1 -26.47514 33.752826 -0.784383
B 2 0.0392213 0.0165545 2.3692242
B 3 0.1310306 0.0425996 3.0758622

Example 9.14: Iterative Proportional Fitting

The classical use of iterative proportional fitting is to adjust frequencies to conform to new marginal totals.
Use the IPF subroutine to perform this kind of analysis. You supply a table that contains new margins and a
table that contains old frequencies. The IPF subroutine returns a table of adjusted frequencies that preserves
any higher-order interactions appearing in the initial table.
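Each cycle of the algorithm rescales the fitted table so that one set of margins at a time matches the new totals. For the two one-way margins used in this example, a cycle has the form (a sketch, writing $n_{i+}$ and $n_{+j}$ for the new row and column totals):

$$ \hat{m}_{ij} \leftarrow \hat{m}_{ij}\, \frac{n_{i+}}{\hat{m}_{i+}}, \qquad \hat{m}_{ij} \leftarrow \hat{m}_{ij}\, \frac{n_{+j}}{\hat{m}_{+j}} $$

Cycles repeat until the changes fall below the stopping criteria.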
The following example is a census study that estimates a population distribution according to age and marital
status (Bishop, Fienberg, and Holland 1975). Estimates of the distribution are known for the previous year,
but only estimates of marginal totals are known for the current year. You want to adjust the distribution of
the previous year to fit the estimated marginal totals of the current year. Here is the program:

proc iml;

/* Stopping criteria */
mod={0.01 15};

/* Marital status has 3 levels. age has 8 levels. */


dim={3 8};

/* New marginal totals for age by marital status */


table={1412 0 0 ,
1402 0 0 ,
1174 276 0 ,
0 1541 0 ,
0 1681 0 ,
0 1532 0 ,
0 1662 0 ,
0 5010 2634};

/* Marginal totals are known for both */


/* marital status and age */

config={1 2};

/* Use known distribution for start-up values */


initab={1306 83 0 ,
619 765 3 ,
263 1194 9 ,
173 1372 28 ,
171 1393 51 ,
159 1372 81 ,
208 1350 108 ,
1116 4100 2329};

call ipf(fit,status,dim,table,config,initab,mod);

c={' SINGLE' ' MARRIED' 'WIDOWED/DIVORCED'};


r={'15 - 19' '20 - 24' '25 - 29' '30 - 34' '35 - 39' '40 - 44'
'45 - 49' '50 OR OVER'};
print
'POPULATION DISTRIBUTION ACCORDING TO AGE AND MARITAL STATUS',,
'KNOWN DISTRIBUTION (PREVIOUS YEAR)',,
initab [colname=c rowname=r format=8.0] ,,
'ADJUSTED ESTIMATES OF DISTRIBUTION (CURRENT YEAR)',,
fit [colname=c rowname=r format=8.2] ;

The results are shown in Output 9.14.1.

Output 9.14.1 Iterative Proportional Fitting: Results

POPULATION DISTRIBUTION ACCORDING TO AGE AND MARITAL STATUS

KNOWN DISTRIBUTION (PREVIOUS YEAR)

initab
SINGLE MARRIED WIDOWED/DIVORCED

15 - 19 1306 83 0
20 - 24 619 765 3
25 - 29 263 1194 9
30 - 34 173 1372 28
35 - 39 171 1393 51
40 - 44 159 1372 81
45 - 49 208 1350 108
50 OR OVER 1116 4100 2329

ADJUSTED ESTIMATES OF DISTRIBUTION (CURRENT YEAR)


fit
SINGLE MARRIED WIDOWED/DIVORCED

15 - 19 1325.27 86.73 0.00


20 - 24 615.56 783.39 3.05
25 - 29 253.94 1187.18 8.88
30 - 34 165.13 1348.55 27.32
35 - 39 173.41 1454.71 52.87
40 - 44 147.21 1308.12 76.67
45 - 49 202.33 1352.28 107.40
50 OR OVER 1105.16 4181.04 2357.81

Example 9.15: Full-Screen Nonlinear Regression

This example shows how to build a menu system that enables you to perform nonlinear regression from a
menu. Six modules are stored on an IML storage disk. After you have stored them, use this example to try
out the system. First, invoke IML and set up some sample data in memory, in this case the population of the
U.S. from 1790 to 1970. Then invoke the module NLIN, as follows:

reset storage='nlin';
load module=_all_;
uspop = {3929, 5308, 7239, 9638, 12866, 17069, 23191, 31443,
39818, 50155, 62947, 75994, 91972, 105710, 122775, 131669,
151325, 179323, 203211}/1000;
year=do(1790,1970,10)`;
time=year-1790;
print year time uspop;
run nlin;

A menu similar to the following menu appears. The entry fields are shown by underscores here, but the
underscores become blanks in the real session.

Nonlinear Regression
Response function: ______________________________________________
Predictor function: _____________________________________________

Parameter Value Derivative


: ________ ___________ __________________________________________
: ________ ___________ __________________________________________
: ________ ___________ __________________________________________
: ________ ___________ __________________________________________
: ________ ___________ __________________________________________
: ________ ___________ __________________________________________

Enter an exponential model and fill in the response and predictor expression fields. For each parameter,
enter the name, initial value, and derivative of the predictor with respect to the parameter. Here are the
populated fields:

Nonlinear Regression
Response function: uspop_________________________________________
Predictor function: a0*exp(a1*time)______________________________

Parameter Value Derivative


: a0______ ________3.9 exp(a1*time)_______________________________
: a1______ __________0 time*a0*exp(a1*time)_______________________
: ________ ___________ ___________________________________________
: ________ ___________ ___________________________________________
: ________ ___________ ___________________________________________
: ________ ___________ ___________________________________________

Now press the SUBMIT key. The model compiles, the iterations start blinking on the screen, and when the
model has converged, the estimates are displayed along with their standard errors, t test, and significance
probability.
To modify and rerun the model, submit the following command:

run nlrun;
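The NLEST module at the heart of the system implements the classical Gauss-Newton step with Hartley step-halving (a sketch of the update that the code performs):

$$ \Delta = (J'J)^{-1} J'r, \qquad \beta \leftarrow \beta + \Delta/2^k $$

where $J$ is the Jacobian of the predictor with respect to the parameters, $r$ is the residual vector, and $k$ is the smallest number of halvings (at most 10) that reduces the error sum of squares.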

Here is the program that defines and stores the modules of the system.

/* Full-Screen Nonlinear Regression */


/* Six modules are defined, which constitute a system for */
/* nonlinear regression. The interesting feature of this */
/* system is that the problem is entered in a menu, and both */
/* iterations and final results are displayed on the same */
/* menu. */
/* */
/* Run this source to get the modules stored. Examples */
/* of use are separate. */
/* */
/* Caution: this is a demonstration system only. It does not */
/* have all the necessary safeguards in it yet to */
/* recover from user errors or rough models. */
/* Algorithm: */
/* Gauss-Newton nonlinear regression with step-halving. */
/* Notes: program variables all start with nd or _ to */
/* minimize the problems that would occur if user variables */
/* interfered with the program variables. */

/* Gauss-Newton nonlinear regression with Hartley step-halving */

/*---Routine to set up display values for new problem---*/


start nlinit;
window nlin rows=15 columns=80 color='green'
msgline=_msg cmndline=_cmnd
group=title +30 'Nonlinear Regression' color='white'
group=model / @5 'Response function:' color='white'
+1 nddep $55. color='blue'
/ @5 'Predictor function:' color='white'
+1 ndfun $55. color='blue'


group=parm0 // @5 'Parameter' color='white' @15 'Value'
@30 'Derivative'
group=parm1 // @5 'Parameter' color='white' @15 'Value'
group=parm2 // @5 'Parameter' color='white' @19 'Estimate'
@33 'Std Error'
@48 'T Ratio'
@62 'Prob>|T|'
group=parminit /@3 ':' color='white'
@5 ndparm $8. color='blue'
@15 ndbeta best12. @30 ndder $45.
group=parmiter / @5 _parm color='white'
@15 _beta best12. color='blue'
group=parmest / @5 _parm color='white'
@15 _beta best12. color='blue'
@30 _std best12.
@45 _t 10.4
@60 _prob 10.4
group=sse // @5 'Iteration =' color='white' _iter 5. color='blue'
' Stephalvings = ' color='white' _subit 3. color='blue'
/ @5 'Sum of Squares Error =' color='white' _sse best12.
color='blue';
nddep=cshape(' ',1,1,55,' ');
ndfun=nddep;
nd0=6;
ndparm=repeat(' ',nd0,1);
ndbeta=repeat(0,nd0,1);
ndder=cshape(' ',nd0,1,55,' ');
_msg='Enter New Nonlinear Problem';
finish nlinit; /* Finish module NLINIT */

/* Main routine */
start nlin;
run nlinit; /* initialization routine */
run nlrun; /* run routine */
finish nlin;

/* Routine to show each iteration */


start nliter;
display nlin.title noinput,
nlin.model noinput,
nlin.parm1 noinput,
nlin.parmiter repeat noinput,
nlin.sse noinput;
finish nliter;

/* Routine for one run */


start nlrun;
run nlgen; /* generate the model */
run nlest; /* estimate the model */
finish nlrun;

/* Routine to generate the model */



start nlgen;

/* Model definition menu */


display nlin.title, nlin.model, nlin.parm0, nlin.parminit repeat;

/* Get number of parameters */


t=loc(ndparm=' ');
if nrow(t)=0 then
do;
print 'no parameters';
stop;
end;
_k=t[1] -1;

/* Trim extra rows, and edit '*' to '#' */


_dep=nddep; call change(_dep,'*','#',0);
_fun=ndfun; call change(_fun,'*','#',0);
_parm=ndparm[1:_k,];
_beta=ndbeta[1:_k,];
_der=ndder [1:_k,];
call change(_der,'*','#',0);

/* Construct nlresid module to split up parameters and */


/* compute model */
call queue('start nlresid;');
do i=1 to _k;
call queue(_parm[i] ,"=_beta[",char(i,2),"] ;");
end;
call queue("_y = ",_dep,";",
"_p = ",_fun,";",
"_r = _y-_p;",
"_sse = ssq(_r);",
"finish;" );

/* Construct nlderiv function */


call queue('start nlderiv; _x = ');
do i=1 to _k;
call queue("(",_der[i] ,")#repeat(1,nobs,1)||");
end;
call queue(" nlnothin; finish;");

/* Pause to compile the functions */


call queue("resume;");
pause *;
finish nlgen; /* Finish module NLGEN */

/* Routine to do estimation */
start nlest;

/* Modified Gauss-Newton Nonlinear Regression */


/* _parm has parm names */
/* _beta has initial values for parameters */
/* _k is the number of parameters */

/* after nlresid: */
/* _y has response, */
/* _p has predictor after call */
/* _r has residuals */
/* _sse has sse */
/* after nlderiv */
/* _x has jacobian */
/* */

eps=1;
_iter = 0;
_subit = 0;
_error = 0;
run nlresid; /* f, r, and sse for initial beta */
run nliter; /* print iteration zero */
nobs = nrow(_y);
_msg = 'Iterating';

/* Gauss-Newton iterations */
do _iter=1 to 30 while(eps>1e-8);
run nlderiv; /* subroutine for derivatives */
_lastsse=_sse;
_xpxi=sweep(_x`*_x);
_delta=_xpxi*_x`*_r; /* correction vector */
_old = _beta; /* save previous parameters */
_beta=_beta+_delta; /* apply the correction */
run nlresid; /* compute residual */
run nliter; /* print iteration in window */
eps=abs((_lastsse-_sse))/(_sse+1e-6);
/* convergence criterion */

/* Hartley subiterations */
do _subit=1 to 10 while(_sse>_lastsse);
_delta=_delta*.5; /* halve the correction vector */
_beta=_old+_delta; /* apply the halved correction */
run nlresid; /* find sse et al */
run nliter; /* print subiteration in window */
end;
if _subit>10 then
do;
_msg = "did not improve after 10 halvings";
eps=0; /* make it fall through iter loop */
end;
end;

/* print out results */


_msg = ' ';
if _iter>30 then
do;
_error=1;
_msg = 'convergence failed';
end;
_iter=_iter-1;

_dfe = nobs-_k;
_mse = _sse/_dfe;
_std = sqrt(vecdiag(_xpxi)#_mse);
_t = _beta/_std;
_prob= 1-probf(_t#_t,1,_dfe);
display nlin.title noinput,
nlin.model noinput,
nlin.parm2 noinput,
nlin.parmest repeat noinput,
nlin.sse noinput;
finish nlest; /* Finish module NLEST */

/* Store the modules to run later */


reset storage='nlin';
store module=_all_;

References
Bishop, Y. M. M., Fienberg, S. E., and Holland, P. W. (1975), Discrete Multivariate Analysis: Theory and
Practice, Cambridge, MA: MIT Press.

Charnes, A., Frome, E. L., and Yu, P. L. (1976), "The Equivalence of Generalized Least Squares and Maximum Likelihood Estimation in the Exponential Family," Journal of the American Statistical Association, 71, 169–172.

Cox, D. R. (1970), Analysis of Binary Data, London: Methuen.

Grizzle, J. E., Starmer, C. F., and Koch, G. G. (1969), "Analysis of Categorical Data by Linear Models," Biometrics, 25, 489–504.

Hadley, G. (1962), Linear Programming, Reading, MA: Addison-Wesley.

Jennrich, R. I. and Moore, R. H. (1975), "Maximum Likelihood Estimation by Means of Nonlinear Least Squares," American Statistical Association.

Kaiser, H. F. and Caffrey, J. (1965), "Alpha Factor Analysis," Psychometrika, 30, 1–14.

Kastenbaum, M. A. and Lamphiear, D. E. (1959), "Calculation of Chi-Square to Test the No Three-Factor Interaction Hypothesis," Biometrics, 15, 107–122.

Nelder, J. A. and Wedderburn, R. W. M. (1972), "Generalized Linear Models," Journal of the Royal Statistical Society, Series A, 135, 370–384.
Chapter 10

Submitting SAS Statements

Contents
Introduction to Submitting SAS Statements . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Calling a Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Passing Parameters from SAS/IML Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Details of Parameter Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Creating Graphics in a SUBMIT Block . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Handling Errors in a SUBMIT Block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Differences between the SUBMIT Statement in PROC IML and in SAS/IML Studio . . . . 188

Introduction to Submitting SAS Statements

In 2002, the IML Workshop application (now known as SAS/IML Studio) introduced a mechanism for submitting SAS statements from programs written in the IMLPlus language. As of SAS/IML 9.22, this feature
is also available in PROC IML. This chapter shows you how to submit SAS statements from PROC IML by
using the SUBMIT and ENDSUBMIT statements. By using these statements, SAS/IML programmers can
call any SAS procedure without losing the state of their PROC IML session.
The statements between the SUBMIT and the ENDSUBMIT statements are referred to as a SUBMIT block.
The SUBMIT block is processed by the SAS language processor. You can use the SUBMIT statement to
call DATA steps, macros, and SAS procedures.
This chapter covers the following topics:

 calling a SAS procedure from PROC IML

 passing parameters into the SUBMIT block

 creating ODS graphics in a SUBMIT block

 handling errors in the SUBMIT block

 differences between the SUBMIT statement in PROC IML and the IMLPlus statement of the same
name as it is implemented in SAS/IML Studio

Calling a Procedure

This section describes how to call a procedure from PROC IML.


Suppose you have data in a SAS/IML matrix that you want to analyze by using a statistical procedure. In
general, you can use the following steps to analyze the data:

1. Write the data to a SAS data set by using the CREATE and APPEND statements.

2. Use the SUBMIT statement to call a SAS procedure that analyzes the data.

3. Read the results of the analysis into SAS/IML matrices by using the USE and READ statements.

4. Use the results in further computations.

Of course, if the data are already in a SAS data set, you can skip the first step. Similarly, if you are solely
interested in the printed output from a procedure, you can skip the third and fourth steps.
The following example calls the UNIVARIATE procedure in Base SAS software to compute descriptive statistics. In order to tell the SAS/IML language interpreter that you want certain statements to be sent to
the SAS System, you must enclose your SAS statements with SUBMIT and ENDSUBMIT statements. The
ENDSUBMIT statement must appear on a line by itself.

1. The following statements create a SAS data set from data in a vector:

proc iml;
q = {3.7, 7.1, 2, 4.2, 5.3, 6.4, 8, 5.7, 3.1, 6.1, 4.4, 5.4, 9.5, 11.2};

create MyData var {q};


append;
close MyData;

The MyData data set is used in the rest of this chapter.

2. You can call the UNIVARIATE procedure to analyze these data. The following statements use the ODS SELECT statement to limit the output from the UNIVARIATE procedure. The output is shown in Figure 10.1.

submit;
ods select Moments;
proc univariate data=MyData;
var q;
ods output Moments=Moments;
run;
endsubmit;

Figure 10.1 Output from the UNIVARIATE Procedure

The UNIVARIATE Procedure


Variable: Q

Moments

N 14 Sum Weights 14
Mean 5.86428571 Sum Observations 82.1
Std Deviation 2.49387161 Variance 6.2193956
Skewness 0.66401924 Kurtosis 0.34860956
Uncorrected SS 562.31 Corrected SS 80.8521429
Coeff Variation 42.5264343 Std Error Mean 0.66651522

3. The previous statements also used the ODS OUTPUT statement to create a data set named Moments that
contains the statistics shown in Figure 10.1. In the data set, the first column of Figure 10.1 is contained in
a variable named Label1 and the second column is contained in a variable named nValue1. The following
statements read those variables into SAS/IML vectors of the same names and print the values:

use Moments;
read all var {"nValue1" "Label1"};
close Moments;

labl = "Statistics for " + name(q);


print nValue1[rowname=Label1 label=labl];

Figure 10.2 Statistics Read into SAS/IML Vectors

Statistics for q

N 14
Mean 5.8642857
Std Deviation 2.4938716
Skewness 0.6640192
Uncorrected SS 562.31
Coeff Variation 42.526434

4. By using this technique, you can read the value of any statistic that is created by any SAS procedure.
You can then use these values in subsequent computations in PROC IML. For example, if you want to
standardize the q vector, you can use the mean and standard deviation as computed by the UNIVARIATE
procedure, as shown in the following statements:

mean = nValue1[2];
stddev = nValue1[3];
stdQ = (q - mean)/stddev;

Passing Parameters from SAS/IML Matrices

The SUBMIT statement enables you to substitute the values of a SAS/IML matrix into the statements that
are submitted to the SAS System. For example, the following program calls the UNIVARIATE procedure
to analyze data in the MyData data set that was created in the section Calling a Procedure on page 180.
The program submits SAS statements that are identical to the SUBMIT block in that section:

table = "Moments";
varName = "q";

submit table varName;


ods select &table;
proc univariate data=MyData;
var &varName;
ods output &table=&table;
run;
endsubmit;

You can list the names of SAS/IML matrices in the SUBMIT statement and refer to the contents of those
matrices inside the SUBMIT block. The syntax is reminiscent of the syntax for macro variables: an amper-
sand (&) preceding an expression means substitute the value of the expression. However, the substitution
takes place before the SUBMIT block is sent to the SAS System; no macro variables are actually created.
You can substitute values from character or numeric matrices and vectors. If x is a vector, then &x lists the
elements of x separated by spaces. For example, the following statements compute trimmed means for three
separate values of the TRIM= option:

table = "TrimmedMeans";
varName = "q";
n = {1, 3, 5}; /* number of observations to trim */

submit table varName n;


ods select &table;
proc univariate data=MyData trim=&n;
var &varName;
run;
endsubmit;

The output is shown in Figure 10.3. The values in the column labeled Number Trimmed in Tail correspond to the values in the n matrix. These values were substituted into the TRIM= option in the PROC
UNIVARIATE statement.

Figure 10.3 Trimmed Means

The UNIVARIATE Procedure


Variable: Q

Trimmed Means

Percent Number Std Error


Trimmed Trimmed Trimmed Trimmed 95% Confidence
in Tail in Tail Mean Mean Limits DF

7.14 1 5.741667 0.664486 4.279142 7.204191 11


21.43 3 5.575000 0.587204 4.186483 6.963517 7
35.71 5 5.625000 0.408613 4.324612 6.925388 3

Trimmed Means

Percent
Trimmed t for H0:
in Tail Mu0=0.00 Pr > |t|

7.14 8.64076 <.0001


21.43 9.49414 <.0001
35.71 13.76609 0.0008

Details of Parameter Substitution

The SUBMIT statement supports two kinds of parameter substitution: full substitution and specific substitution.

Full Substitution

If you want to substitute many values into a SUBMIT block, it can be tedious to explicitly list the name
of every SAS/IML matrix that you reference. You can use an asterisk (*) in the SUBMIT statement as a
wildcard character to indicate that all SAS/IML matrices are available for parameter substitution. This is
called full substitution and is shown in the following statements:

proc iml;
DSName = "Sashelp.Class";
NumObs = 1;

submit *;
proc print data=&DSName(obs=&NumObs);
run;
endsubmit;

Figure 10.4 Full Substitution

Obs Name Sex Age Height Weight

1 Alfred M 14 69 112.5

If the SUBMIT block contains a parameter reference (that is, a token that begins with an ampersand (&)) for which there is no matching SAS/IML matrix, the parameter reference is not modified prior to being sent to the SAS language processor. In this way, you can reference SAS macro variables in a SUBMIT block.
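For example, the following sketch (the macro variable name and value are illustrative, not from the original text) defines a macro variable before PROC IML starts; because no SAS/IML matrix named NumObs exists, the reference &NumObs passes through the SUBMIT block unchanged and is resolved by the macro processor:

%let NumObs = 3;   /* macro variable; no SAS/IML matrix has this name */

proc iml;
submit *;
   proc print data=Sashelp.Class(obs=&NumObs); /* &NumObs is passed through */
   run;
endsubmit;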

Specific Substitution

A SUBMIT statement that contains an explicit list of parameters is easier to understand than a SUBMIT statement that contains only the asterisk wildcard character (*). Specifying an explicit list of parameters is called specific substitution. These parameters, and only these, are used to make substitutions into the SUBMIT block.

proc iml;
DSName = "Sashelp.Class";
NumObs = 2;

submit DSName NumObs;
   proc print data=&DSName(obs=&NumObs);
   run;
endsubmit;

Figure 10.5 Specific Substitution

Obs    Name      Sex    Age    Height    Weight

  1    Alfred    M       14      69.0     112.5
  2    Alice     F       13      56.5      84.0

If the SUBMIT block contains a parameter reference (that is, a token that begins with an ampersand (&)) for which there is no matching parameter, the parameter reference is not modified prior to being sent to the SAS language processor. In this way, you can reference SAS macro variables in a SUBMIT block.
With specific substitution, you have additional options for specifying the value of a parameter. You can use
any of the following ways to specify the value of a parameter:

- Specify the name of a SAS/IML matrix to use for the value of a parameter, as shown in the following statements:

s = "Sashelp.Class"; n = 2;

submit DSName=s NumObs=n;
   proc print data=&DSName(obs=&NumObs);
   run;
endsubmit;

- Specify a literal value to use for the value of a parameter, as shown in the following statements:

submit DSName="Sashelp.Class" NumObs=2;
   proc print data=&DSName(obs=&NumObs);
   run;
endsubmit;

- Specify a matrix expression that is enclosed in parentheses, as shown in the following statements:

libref = "Sashelp";
fname = "Class";
NumObs = 2;

submit DSName=(libref+"."+fname) NumObs;
   proc print data=&DSName(obs=&NumObs);
   run;
endsubmit;

Creating Graphics in a SUBMIT Block

If you use the SUBMIT statement to call a SAS procedure that creates a graph, that graph is sent to the current ODS destination. The following statements call the UNIVARIATE procedure, which creates a histogram as part of the analysis:

ods graphics on;

proc iml;
msg1 = "First PRINT Statement in PROC IML";
msg2 = "Second PRINT Statement in PROC IML";
print msg1;

submit;
ods select Moments Histogram;
proc univariate data=Sashelp.Class;
var Height;
histogram / kernel;
run;
endsubmit;

print msg2;
ods graphics off;

When you run the program, the PROC UNIVARIATE output is interleaved with the PROC IML output. The
output from the program is shown in Figure 10.6 through Figure 10.8.

Figure 10.6 Output from PROC IML and from SUBMIT Block

msg1

First PRINT Statement in PROC IML

The UNIVARIATE Procedure


Variable: Height

Moments

N 19 Sum Weights 19
Mean 62.3368421 Sum Observations 1184.4
Std Deviation 5.12707525 Variance 26.2869006
Skewness -0.2596696 Kurtosis -0.1389692
Uncorrected SS 74304.92 Corrected SS 473.164211
Coeff Variation 8.22479143 Std Error Mean 1.17623173

Figure 10.7 Graphic Created in a SUBMIT Block



Figure 10.8 Further PROC IML Output

msg2

Second PRINT Statement in PROC IML

Handling Errors in a SUBMIT Block

After running a SUBMIT block, PROC IML continues to execute the remaining statements in the program.
However, if there is an error in the SUBMIT block, it might make sense to abort the program or to handle
the error in some other way.
The OK= option in the SUBMIT statement provides a limited form of error handling. If you specify the
OK= option, then PROC IML sets a matrix to the value 1 if the SUBMIT block executes without error.
Otherwise, the matrix is set to the value 0.
The following statements contain an error in a SUBMIT block: two letters are transposed when specifying
the name of a data set. Consequently, the isOK matrix is set to 0, and the program handles the error.

DSName = "Sashelp.calss"; /* mistyped name; data set does not exist */

submit DSName / ok=isOK;
   proc univariate data=&DSName;
      var Height;
      ods output Moments=Moments;
   run;
endsubmit;

if isOK then do;   /* handle the no-error case */
   use Moments;
   read all var {"nValue1"} into m;
   close Moments;
   skewness = m[4]; /* get statistic from procedure output */
end;
else
   skewness = .;   /* handle an error */

print skewness;

Figure 10.9 The Result of Handling an Error in a SUBMIT Block

skewness

.

Differences between the SUBMIT Statement in PROC IML and in SAS/IML Studio

This section lists differences between the SUBMIT statement as implemented in IMLPlus (the programming language used in SAS/IML Studio) and the SUBMIT statement in PROC IML:

- In PROC IML, macro variables that are created in a SUBMIT block are not accessible from outside the SUBMIT block. In IMLPlus, the macro variables are available.

- In PROC IML, global SAS statements such as the LIBNAME, FILENAME, and OPTIONS statements that are executed in a SUBMIT block do not affect the environment outside the SUBMIT block. In IMLPlus, librefs, filerefs, and options that are created inside the SUBMIT block continue to be defined after the ENDSUBMIT statement.

- In PROC IML, ODS statements executed in a SUBMIT block do not affect the environment outside the SUBMIT block. In IMLPlus, ODS statements can affect subsequent output.

- In PROC IML, a SUBMIT block clears the ODS SELECT and ODS EXCLUDE lists, even if the SUBMIT block does not contain a DATA step or a procedure call. This does not occur in IMLPlus.

- In PROC IML, the SUBMIT statement causes a page break in some ODS destinations (such as the LISTING destination). In IMLPlus, there is no unnecessary page break.
Chapter 11

Calling Functions in the R Language

Contents
Overview of Calling Functions in the R Language . . . . . . . . . . . . . . . . . . . . . . . 189
Installing the R Statistical Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
The RLANG System Option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Submit R Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Transferring Data between SAS and R Software . . . . . . . . . . . . . . . . . . . . . . . . 192
Transfer from a SAS Source to an R Destination . . . . . . . . . . . . . . . . . . . . 193
Transfer from an R Source to a SAS Destination . . . . . . . . . . . . . . . . . . . . 193
Call an R Analysis from PROC IML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Using R to Analyze Data in SAS/IML Matrices . . . . . . . . . . . . . . . . . . . . . 194
Using R to Analyze Data in a SAS Data Set . . . . . . . . . . . . . . . . . . . . . . . 196
Passing Parameters to R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Call R Packages from PROC IML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Call R Graphics from PROC IML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Handling Errors from R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Details of Data Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Numeric Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Logical Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Unsupported Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Special Numeric Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Date, Time, and Datetime Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Time Series Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Differences from SAS/IML Studio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203

Overview of Calling Functions in the R Language

R is a freely available language and environment for statistical computing and graphics. Like the SAS/IML language, the R language has features suitable for developers of statistical algorithms: the ability to manipulate matrices and vectors, a large number of built-in functions for computing statistical quantities, and the capability to extend the basic function library by writing user-defined functions. There are also a large number of user-contributed packages in R that implement specialized computations.

In 2009, the SAS/IML Studio application introduced a mechanism for calling R functions from programs written in the IMLPlus language. As of SAS/IML 9.22, this feature is available in PROC IML. This chapter shows you how to call R functions from PROC IML by using the SUBMIT and ENDSUBMIT statements.
This chapter describes how to configure the SAS system so that you can call functions in the R language. The chapter also describes how to do the following:

- transfer data to R

- call R functions from PROC IML

- transfer the results from R to a number of SAS data structures

Installing the R Statistical Software

SAS does not distribute R software. In order to call R software, you must first install R on the same
computer that runs SAS software. If you access a SAS workspace server through client software such as
SAS Enterprise Guide, then R must be installed on the SAS server.
You can download R from The Comprehensive R Archive Network Web site: http://cran.r-project.org. If you experience problems installing R, consult the R FAQ: http://cran.r-project.org/doc/FAQ/R-FAQ.html. SAS Technical Support does not provide support for installing or configuring third-party software.
In SAS/IML, the interface to R is supported on computers that run a 32-bit or 64-bit Windows operating system or Linux operating systems. If you are using SAS software in a 64-bit Linux environment, you must download a 64-bit binary distribution of R. Otherwise, download a 32-bit binary distribution.
The document "Installing R on Linux Operating Systems" is available on support.sas.com and includes pointers for installing R on Linux so that it works with the SAS interface to R.

The RLANG System Option

The RLANG system option determines whether you have permission to call R from the SAS system. You
can determine the value of the RLANG option by submitting the following SAS statements:

proc options option=RLANG;
run;

The result is one of the following statements in the SAS log:

NORLANG    Do not support access to R language interfaces

If the SAS log contains this statement, you do not have permission to call R from the SAS system.

RLANG      Support access to R language interfaces

If the SAS log contains this statement, you can call R from the SAS system.

The RLANG option can be changed only at SAS start-up. In order to call R, the SAS system must be launched with the -RLANG option. (It is often convenient to insert this option in a SASV9.CFG file.) For security reasons, some system administrators configure the SAS system to start with the -NORLANG option. The RLANG option is similar to the XCMD option in that both options enable SAS users to potentially write or delete important data and system files.
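For example, either of the following approaches enables the option (the exact command and file location vary by installation, so treat these lines as a sketch). You can specify the option on the command line that starts SAS:

   sas -rlang

or you can add the following line to the SASV9.CFG file:

   -RLANG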
If you attempt to submit R statements on a system that was not launched with the -RLANG option, you get
the following error message:

ERROR: The RLANG system option must be specified in the SAS configuration file or on the
SAS invocation command line to enable the submission of R language statements.

Some operating systems do not support the RLANG system option. The RLANG system option is currently
supported for the Windows and Linux operating systems. If you attempt to submit R statements on a host
that does not support the RLANG option, you get the following warning message:

WARNING: SAS option RLANG is not supported on this host.

Submit R Statements

In order to call R from the SAS system, the R statistical software must be installed on the SAS workspace
server and the RLANG system option must be enabled. (See The RLANG System Option on page 190.)
Chapter 10, Submitting SAS Statements, describes how to submit SAS statements from PROC IML.
Submitting R statements is similar. You use a SUBMIT statement, but add the R option: SUBMIT / R. All
statements in the program between the SUBMIT statement and the next ENDSUBMIT statement are sent
to R for execution. The ENDSUBMIT statement must appear on a line by itself.
The simplest program that calls R is one that does not transfer any data between the two environments. In
the following program, SAS/IML is used to compute the product of a matrix and a vector. The result is
printed. Then the SUBMIT statement with the R option is used to send an equivalent set of statements to R.

proc iml;
/* Comparison of matrix operations in IML and R */
print "---------- SAS/IML Results -----------------";
x = 1:3; /* vector of sequence 1,2,3 */
m = {1 2 3, 4 5 6, 7 8 9}; /* 3 x 3 matrix */
q = m * t(x); /* matrix multiplication */
print q;

print "------------- R Results --------------------";

submit / R;
rx <- matrix( 1:3, nrow=1) # vector of sequence 1,2,3
rm <- matrix( 1:9, nrow=3, byrow=TRUE) # 3 x 3 matrix
rq <- rm %*% t(rx) # matrix multiplication
print(rq)
endsubmit;

The printed output from R is automatically routed to the current SAS output destination, as shown in Figure 11.1. As expected, the result of the computation is the same in R as in SAS/IML.

Figure 11.1 Output from SAS/IML and R

---------- SAS/IML Results -----------------

14
32
50

------------- R Results --------------------

[,1]
[1,] 14
[2,] 32
[3,] 50

Transferring Data between SAS and R Software

Many research statisticians take advantage of special-purpose functions and packages written in the R language. When you call an R function, the data must be accessible to R, either in a data frame or in an R matrix. This section describes how you can transfer data and statistical results (for example, fitted values or parameter estimates) between SAS and R data structures.
You can transfer data to and from the following SAS data structures:

- a SAS data set in a libref

- a SAS/IML matrix

In addition, you can transfer data to and from the following R data structures:

- an R data frame

- an R matrix

Transfer from a SAS Source to an R Destination

Table 11.1 summarizes the subroutines that copy data from a SAS source to an R destination. For more information, see the section "Details of Data Transfer" on page 200.

Table 11.1 Transferring from a SAS Source to an R Destination

Subroutine           SAS Source        R Destination
ExportDataSetToR     SAS data set      R data frame
ExportMatrixToR      SAS/IML matrix    R matrix

As a simple example, the following program transfers a data set from the Sashelp libref into an R data
frame named df. The program then submits an R statement that displays the names of the variables in the
data frame.

proc iml;
call ExportDataSetToR("Sashelp.Class", "df" );
submit / R;
names(df)
endsubmit;

The R names function produces the output shown in Figure 11.2.

Figure 11.2 Result of Sending Data to R

[1] "Name" "Sex" "Age" "Height" "Weight"

Transfer from an R Source to a SAS Destination

You can transfer data and results from R data frames or matrices to a SAS data set or a SAS/IML matrix.
Table 11.2 summarizes the frequently used methods that copy from an R source to a SAS destination.

Table 11.2 Transferring from an R Source to a SAS Destination

Subroutine            R Source        SAS Destination
ImportDataSetFromR    R expression    SAS data set
ImportMatrixFromR     R expression    SAS/IML matrix

The next section includes an example of calling an R analysis. Some of the results from the analysis are
then transferred into SAS/IML matrices.
The result of an R analysis can be a complicated structure. In order to transfer an R object via the previously mentioned methods and modules, the object must be coercible to a data frame. (The R object m can be coerced to a data frame provided that the function as.data.frame(m) succeeds.) There are many data structures that cannot be coerced into data frames. As the example in the next section shows, you can use R statements to extract and transfer simpler objects.
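The following sketch (which uses R's built-in cars data set; the object names are arbitrary) shows one way to test coercibility from a SUBMIT block:

proc iml;
submit / R;
   m <- lm(dist ~ speed, data=cars)      # cars: built-in R data set
   r <- try(as.data.frame(m), silent=TRUE)
   print(class(r))                       # "try-error": m is not coercible
   print(class(as.data.frame(coef(m))))  # "data.frame": coefficients are
endsubmit;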

Call an R Analysis from PROC IML

You can use the techniques in Chapter 10, Submitting SAS Statements, to perform a linear regression
by calling a regression procedure (such as REG, GLM, or MIXED) in SAS/STAT software. This section
presents examples of submitting statements to R to perform a linear regression. The first example performs
a linear regression on data that are transferred from SAS/IML vectors. The second example performs an
identical analysis on data that are transferred from a SAS data set.

Using R to Analyze Data in SAS/IML Matrices

The program in this section consists of four parts:

1. Read the data into SAS/IML vectors.

2. Transfer the data to R.

3. Call R functions to analyze the data.

4. Transfer the results of the analysis into SAS/IML vectors.

1. Read the data. The following statements read the Weight and Height variables from the Sashelp.Class
data set into SAS/IML vectors with the same names:

proc iml;
use Sashelp.Class;
read all var {Weight Height};
close Sashelp.Class;

2. Transfer the data to R. The following statements run the ExportMatrixToR subroutine in order to transfer
data from a SAS/IML matrix into an R matrix. The names of the corresponding R vectors that contain
the data are w and h.

/* send matrices to R */
call ExportMatrixToR(Weight, "w");
call ExportMatrixToR(Height, "h");

3. Call R functions to perform some analysis. The SUBMIT statement with the R option is used to send statements to R. Comments in R begin with a hash mark (#, also called a number sign or a pound sign).

submit / R;
Model <- lm(w ~ h, na.action="na.exclude") # a
ParamEst <- coef(Model) # b
Pred <- fitted(Model)
Resid <- residuals(Model)
endsubmit;

The R program consists of the following steps:

a. The lm function computes a linear model of w as a function of h. The na.action= option specifies how the model handles missing values (which in R are represented by NA). In particular, the na.exclude option specifies that the lm function should not omit observations with missing values from residual and predicted values. This option makes it easier to merge the R results with the original data when the data contain missing values.
b. Several results are retrieved from the linear model and placed into the R vectors named ParamEst, Pred, and Resid.

4. Transfer the data from R. The ImportMatrixFromR subroutine transfers the ParamEst vector from R into
a SAS/IML vector named pe. This vector is printed by the SAS/IML PRINT statement. The predicted
values (Pred) and residual values (Resid) can be transferred similarly. The parameter estimates are used
to compute the predicted values for a series of hypothetical heights, as shown in Figure 11.3.

call ImportMatrixFromR(pe, "ParamEst");
print pe[r={"Intercept" "Height"}];

ht = T( do(55, 70, 5) );
A = j(nrow(ht),1,1) || ht;
pred_wt = A * pe;
print ht pred_wt;

Figure 11.3 Results from an R Analysis

pe

Intercept -143.0269
Height 3.8990303

ht pred_wt

55 71.419746
60 90.914898
65 110.41005
70 129.9052

You cannot directly transfer the contents of the Model object. Instead, various R functions are used to extract
portions of the Model object, and those simpler pieces are transferred.

Using R to Analyze Data in a SAS Data Set

As an alternative to the data transfer statements in the previous section, you can call the ExportDataSetToR
subroutine to transfer the entire SAS data set to an R data frame. For example, you could use the following
statements to create an R data frame named Class and to model the Weight variable:

call ExportDataSetToR("Sashelp.Class", "Class");

submit / R;
   Model <- lm(Weight ~ Height, data=Class, na.action="na.exclude")
endsubmit;

The R language is case-sensitive, so you must use the correct case to refer to variables in a data frame. You can use the CONTENTS function in the SAS/IML language to obtain the names and capitalization of variables in a SAS data set.
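For example, the following minimal sketch lists the variables of Sashelp.Class with their stored capitalization:

proc iml;
varNames = contents("Sashelp", "Class"); /* variable names, in stored case */
print varNames;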

Passing Parameters to R

The SUBMIT statement supports parameter substitution from SAS/IML matrices as detailed in "Passing Parameters from SAS/IML Matrices" on page 182. For example, you can substitute the names of analysis variables into a SUBMIT block by using the following statements:

YVar = "Weight";
XVar = "Height";
submit XVar YVar / R;
Model <- lm(&YVar ~ &XVar, data=Class, na.action="na.exclude")
print (Model$call)
endsubmit;

Figure 11.4 shows the result of the print(Model$call) statement. The output shows that the values of the
YVar and XVar matrices were substituted into the SUBMIT block.

Figure 11.4 Parameter Substitutions in a SUBMIT Block

lm(formula = Weight ~ Height, data = Class, na.action = "na.exclude")

Call R Packages from PROC IML

You do not need to do anything special to call an R package. Provided that an R package is installed, you can
call library(package) from inside a SUBMIT block to load the package. You can then call the functions
in the package.

The example in this section calls an R package and imports the results into a SAS data set. This example is similar to the example in "Creating Graphics in a SUBMIT Block" on page 185, which calls the UNIVARIATE procedure to create a kernel density estimate. The program in this section consists of the following steps:

1. Define the data and transfer the data to R.

2. Call R functions to analyze the data.

3. Transfer the results of the analysis into SAS/IML vectors.

1. Define the data in the SAS/IML vector q and then transfer the data to R by using the ExportMatrixToR
subroutine. In R, the data are stored in a vector named rq.

proc iml;
q = {3.7, 7.1, 2, 4.2, 5.3, 6.4, 8, 5.7, 3.1, 6.1, 4.4, 5.4, 9.5, 11.2};
RVar = "rq";
call ExportMatrixToR( q, RVar );

2. Load the KernSmooth package. Because the functions in the KernSmooth package do not handle missing values, the nonmissing values in q must be copied to a matrix p. (There are no missing values in this example.) The Sheather-Jones plug-in bandwidth is computed by calling the dpik function in the KernSmooth package. This bandwidth is used in the bkde function (in the same package) to compute a kernel density estimate.

submit RVar / R;
   library(KernSmooth)
   idx <- which(!is.na(&RVar))      # must exclude missing values (NA)
   p <- &RVar[idx]                  # from KernSmooth functions
   h <- dpik(p)                     # Sheather-Jones plug-in bandwidth
   est <- bkde(p, bandwidth=h)      # est has 2 columns
endsubmit;

3. Copy the results into a SAS data set or a SAS/IML matrix, and perform additional computations. For example, the following statements use the trapezoidal rule to numerically estimate the density that is contained in the tail of the density estimate of the data:

call ImportMatrixFromR( m, "est" );
/* estimate the density for q >= 8 */
x = m[,1]; /* x values for density */
idx = loc( x>=8 ); /* find values x >= 8 */
y = m[idx, 2]; /* extract corresponding density values */

/* Use the trapezoidal rule to estimate the area under the density curve.
The area of a trapezoid with base w and heights h1 and h2 is
w*(h1+h2)/2. */
w = m[2,1] - m[1,1];
h1 = y[1:nrow(y)-1];
h2 = y[2:nrow(y)];
Area = w * sum(h1+h2) / 2;

print Area;

The numerical estimate for the conditional density is shown in Figure 11.5. The estimate is shown graphically in Figure 11.6, where the conditional density corresponds to the shaded area in the figure. Figure 11.6 was created by using the SGPLOT procedure to display the density estimate computed by the R package.

Figure 11.5 Computation That Combines SAS/IML and R Computations


Area

0.2118117

Figure 11.6 Estimated Density for x >= 8



Call R Graphics from PROC IML

R can create graphics in a separate window that, by default, appears on the same computer on which R is running. If you are running PROC IML and R locally on your desktop or laptop computer, you can display R graphics. However, if you are running client software that connects with a remote SAS server that is running PROC IML and R, then R graphics might be disabled.
The following list describes some common scenarios for running a PROC IML program:

- If you run PROC IML through a SAS Display Manager Session (DMS), you can create R graphics from your PROC IML program. The graph appears in the standard R graphics window.

- If you run PROC IML through SAS Enterprise Guide, the display of R graphics is disabled because, in general, the SAS server (and therefore R) is running on a different computer than the SAS Enterprise Guide application.

- If you run PROC IML from interactive line mode or from batch mode, then R graphics are disabled.

You can determine whether R graphics are enabled by calling the interactive function in the R language.
For example, the previous section used R to compute a kernel density estimate for some data. If you are
running PROC IML through SAS DMS, you can create a histogram and overlay the kernel density estimate
by using the following statements:

submit / R;
hist(p, freq=FALSE) # histogram
lines(est) # kde overlay
endsubmit;

The hist function creates a histogram of the data in the p matrix, and the lines function adds the kernel
density estimate contained in the est matrix. The R graphics window contains the histogram, which is
shown in Figure 11.7.

Figure 11.7 R Graphics
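If you want a program to draw only when a graphics window is available, you can guard the calls with the interactive function mentioned above; a minimal sketch that reuses the p and est objects from the previous section:

submit / R;
   if (interactive()) {    # TRUE when an R graphics window can be shown
      hist(p, freq=FALSE)
      lines(est)
   }
endsubmit;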

Handling Errors from R

If you submit R code that causes an error, you can attempt to handle the error by using the OK= option in the SUBMIT statement, as described in "Handling Errors in a SUBMIT Block" on page 187.

Details of Data Transfer

This section describes how data are transferred between SAS and R software. It includes a discussion of
numerical data types, missing values, and data that represent dates and times.

Numeric Data Types

R can store numeric data in either an integer or a double-precision data type. When R data are transferred to a SAS data type, integer types are converted to double precision.

Logical Data Types

R provides a logical data type for storing the values TRUE and FALSE. When logical data are transferred to
a SAS data type, the value TRUE is converted to the number 1 and the value FALSE to the number 0.
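A minimal sketch of this conversion (the names b and x are arbitrary):

proc iml;
submit / R;
   b <- c(TRUE, FALSE, TRUE)     # R logical vector
endsubmit;
call ImportMatrixFromR(x, "b");
print x;                         /* contains 1 0 1 */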

Unsupported Data Types

R provides two data types that are not converted to a SAS data type: complex and raw. It is an error to
attempt to transfer data stored in either of these data types to a SAS data type.

Special Numeric Values

The R language has four symbols that are used to represent special numerical values.

- The symbol NA represents a missing value.

- The symbol Inf represents positive infinity.

- The symbol -Inf represents negative infinity.

- The symbol NaN represents a NaN, which is a floating-point value that represents an undefined value such as the result of the division 0/0.

The SAS language has 28 symbols that are used to represent special numerical values.

- The symbol . represents a generic missing value.

- The symbols .A through .Z and ._ are also missing values. Some applications use .I to represent positive infinity and use .M to represent negative infinity.
The following table shows how special numeric values in R are converted to SAS missing values:

Value in R    SAS Missing Value
Inf           .I
-Inf          .M
NA            .
NaN           .

The following table shows how SAS missing values are converted when data are transferred to R:

SAS Missing Value    Value in R
.I                   Inf
.M                   -Inf
All others           NA
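A minimal sketch of the R-to-SAS direction (the names rx and x are arbitrary):

proc iml;
submit / R;
   rx <- c(1, NA, Inf, -Inf, NaN)
endsubmit;
call ImportMatrixFromR(x, "rx");
print x;   /* 1  .  .I  .M  .  per the mapping above */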

Date, Time, and Datetime Values

R supports date and time data differently than SAS software does. In SAS software, variables that represent dates or times are assigned a format such as DATE9. or TIME5. In R, classes are used to represent dates and times.
When a variable in a SAS data set is transferred to R software, the variable's format is examined and the following occurs (a short sketch follows the list):

- If the format is in the family of date formats (for example, DATEw.d), the variable in R is assigned the Date class.

- If the format is in the family of datetime formats (for example, DATETIMEw.d) or time formats (for example, TIMEw.d), the variable in R is assigned the POSIXct and POSIXt classes.

- In all other cases, the variable in R is assigned the numeric class.

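For example, the following sketch (the data set and variable names are illustrative) creates a SAS date variable with the DATE9. format and verifies the class that it receives in R:

proc iml;
submit;
   data Work.Dates;      /* one date value, DATE9. format */
      d = '01JAN2010'd;
      format d date9.;
   run;
endsubmit;
call ExportDataSetToR("Work.Dates", "df");
submit / R;
   print(class(df$d))    # "Date", per the mapping above
endsubmit;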
When a variable in an R data frame is transferred to SAS software, the variable's class is examined and the following occurs:

- If the variable's class is Date, the corresponding SAS variable is assigned the DATE9. format.

- If the variable's class is POSIXt, the corresponding SAS variable is assigned the DATETIME19. format.

- In all other cases, the SAS variable is not assigned a format.

Time Series Data

In SAS, the sampling times for time series data are often stored in a separate variable. In R, the sampling times for a time series object are specified by the tsp attribute. When a time series object in R is transferred to SAS software, the following occurs:

- The R time function is used to generate a vector of the times at which the time series is sampled.

- A new variable named VarName_ts is created, where VarName is the name of the time series object in R. The variable contains the sampling times for the time series.

No special processing of time series data is performed when data are transferred from SAS to R software.
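For illustration, the following sketch (the series name z is arbitrary) transfers a quarterly R time series; assuming the behavior described above, the resulting data set also contains a variable named z_ts that holds the sampling times:

proc iml;
submit / R;
   z <- ts(1:8, start=2000, frequency=4)   # two years of quarterly data
endsubmit;
call ImportDataSetFromR("Work.TSData", "z");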

Data Structures

R provides a wide range of built-in and user-defined data structures. When data are transferred from R to SAS software, the data are coerced to a data frame prior to the transfer. If the coercion fails, the data are not transferred.
The section "Using R to Analyze Data in SAS/IML Matrices" on page 194 presents an example of an R object that cannot be directly imported to SAS software and shows how to use R functions to extract simpler data structures from the R object.

Differences from SAS/IML Studio

This section lists differences between the R option in the SUBMIT statement as implemented in SAS/IML
Studio and the same option in PROC IML:

- In PROC IML, R must be installed on the computer that runs the SAS server. In SAS/IML Studio, R must be installed on the computer that runs the SAS/IML Studio application.

- If R is installed on a SAS workspace server and is accessed through SAS Enterprise Guide, everyone who connects to that server uses the same version of R and the same set of installed packages. In SAS/IML Studio, R is installed locally on the client computer, so each user can potentially have a different version of R and different packages.
Chapter 12

Robust Regression Examples

Contents
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Using LMS and LTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Example 12.1: Substantial Leverage Points . . . . . . . . . . . . . . . . . . . . . . . 208
Example 12.2: Comparison of LMS, V7 LTS, and FAST-LTS . . . . . . . . . . . . . 211
Example 12.3: LMS and LTS Univariate (Location) Problem: Barnett and Lewis Data 218
Using MVE and MCD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Example 12.4: Brainlog Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Example 12.5: Stackloss Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Combining Robust Residual and Robust Distance . . . . . . . . . . . . . . . . . . . . . . . 238
Example 12.6: Hawkins-Bradu-Kass Data . . . . . . . . . . . . . . . . . . . . . . . 239
Example 12.7: Stackloss Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246

Overview

SAS/IML has four subroutines that can be used for outlier detection and robust regression. The Least Median of Squares (LMS) and Least Trimmed Squares (LTS) subroutines perform robust regression (sometimes called resistant regression). These subroutines are able to detect outliers and perform a least-squares regression on the remaining observations. The Minimum Volume Ellipsoid Estimation (MVE) and Minimum Covariance Determinant Estimation (MCD) subroutines can be used to find a robust location and a robust covariance matrix that can be used for constructing confidence regions, detecting multivariate outliers and leverage points, and conducting robust canonical correlation and principal component analysis.
The LMS, LTS, MVE, and MCD methods were developed by Rousseeuw (1984) and Rousseeuw and Leroy
(1987). All of these methods have the high breakdown value property. Roughly speaking, the breakdown
value is a measure of the proportion of contamination that a procedure can withstand and still maintain its
robustness.
The algorithm used in the LMS subroutine is based on the PROGRESS program of Rousseeuw and Hubert
(1996), which is an updated version of Rousseeuw and Leroy (1987). In the special case of regression
through the origin with a single regressor, Barreto and Maharry (2006) show that the PROGRESS algorithm
does not, in general, find the slope that yields the least median of squares. Starting with release 9.2, the LMS
subroutine uses the algorithm of Barreto and Maharry (2006) to obtain the correct LMS slope in the case of

regression through the origin with a single regressor. In this case, inputs to the LMS subroutine specific to
the PROGRESS algorithm are ignored and output specific to the PROGRESS algorithm is suppressed.
The algorithm used in the LTS subroutine is based on the algorithm FAST-LTS of Rousseeuw and
Van Driessen (2000). The MCD algorithm is based on the FAST-MCD algorithm given by Rousseeuw
and Van Driessen (1999), which is similar to the FAST-LTS algorithm. The MVE algorithm is based on
the algorithm used in the MINVOL program by Rousseeuw (1984). LTS estimation has higher statistical
efficiency than LMS estimation. With the FAST-LTS algorithm, LTS is also faster than LMS for large data
sets. Similarly, MCD is faster than MVE for large data sets.
Besides LTS estimation and LMS estimation, there are other methods for robust regression and outlier
detection. You can refer to a comprehensive procedure, PROC ROBUSTREG, in SAS/STAT. A summary
of these robust tools in SAS can be found in Chen (2002).
The four SAS/IML subroutines are designed for the following:

- LMS: minimizing the hth ordered squared residual

- LTS: minimizing the sum of the h smallest squared residuals

- MCD: minimizing the determinant of the covariance of h points

- MVE: minimizing the volume of an ellipsoid that contains h points

where h is defined in the range

$$ \frac{N}{2} + 1 \;\le\; h \;\le\; \frac{3N}{4} + \frac{n+1}{4} $$

In the preceding equation, N is the number of observations and n is the number of regressors. (The value of h can be specified, but in most applications the default value works just fine, and the results seem to be quite stable with different choices of h.) The value of h determines the breakdown value, which is "the smallest fraction of contamination that can cause the estimator T to take on values arbitrarily far from T(Z)" (Rousseeuw and Leroy 1987). Here, T denotes an estimator and T(Z) applies T to a sample Z.
For each parameter vector $b = (b_1, \ldots, b_n)$, the residual of observation $i$ is $r_i = y_i - x_i b$. You then denote the ordered, squared residuals as

$$ (r^2)_{1:N} \le \cdots \le (r^2)_{N:N} $$

The objective functions for the LMS, LTS, MCD, and MVE optimization problems are defined as follows:

- LMS: the objective function for the LMS optimization problem is the hth ordered squared residual,

  $$ F_{\mathrm{LMS}} = (r^2)_{h:N} \rightarrow \min $$

  Note that, for $h = [N/2] + 1$, the hth quantile is the median of the squared residuals. The default h in PROGRESS is $h = \left[ \frac{N+n+1}{2} \right]$, which yields the breakdown value (where $[k]$ denotes the integer part of $k$).



- LTS: the objective function for the LTS optimization problem is the sum of the h smallest ordered squared residuals,

  $$ F_{\mathrm{LTS}} = \sqrt{ \frac{1}{h} \sum_{i=1}^{h} (r^2)_{i:N} } \rightarrow \min $$

- MCD: the objective function for the MCD optimization problem is based on the determinant of the covariance of the selected h points,

  $$ F_{\mathrm{MCD}} = \det(C_h) \rightarrow \min $$

  where $C_h$ is the covariance matrix of the selected h points.

- MVE: the objective function for the MVE optimization problem is based on the hth quantile $d_{h:N}$ of the Mahalanobis-type distances $d = (d_1, \ldots, d_N)$,

  $$ F_{\mathrm{MVE}} = d_{h:N} \sqrt{\det(C)} \rightarrow \min $$

  subject to $d_{h:N} = \sqrt{\chi^2_{n,0.5}}$, where C is the scatter matrix estimate, and the Mahalanobis-type distances are computed as

  $$ d = \sqrt{ \mathrm{diag}\left( (X - T)^T C^{-1} (X - T) \right) } $$

  where T is the location estimate.

Because of the nonsmooth form of these objective functions, the estimates cannot be obtained with traditional optimization algorithms. For LMS and LTS, the algorithm, as in the PROGRESS program, selects a number of subsets of n observations out of the N given observations, evaluates the objective function, and saves the subset with the lowest objective function. As long as the problem size enables you to evaluate all such subsets, the result is a global optimum. If computing time does not permit you to evaluate all the different subsets, a random collection of subsets is evaluated. In such a case, you might not obtain the global optimum.
Note that the LMS, LTS, MCD, and MVE subroutines are executed only when the number N of observations is more than twice the number n of explanatory variables xj (including the intercept); that is, if N > 2n.
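As a worked instance of the default subset size (an illustration added here, using numbers from Example 12.2): the stackloss data have N = 21 observations and n = 4 regressors (including the intercept), so the default is

$$ h = \left[ \frac{N+n+1}{2} \right] = \left[ \frac{21+4+1}{2} \right] = 13 $$

which matches the 13 observations listed for the FAST-LTS best subset in Output 12.2.7.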

Using LMS and LTS

Because of space considerations, the output of the tables that contain residuals and resistant diagnostics is not included in this document. The SCATLMTS and LMSDIAP modules are used in these examples for printing and plotting the results. The PRILMTS module can be used for printing the results. These routines are in the robustmc.sas file that is contained in the sample library.

Example 12.1: LMS and LTS with Substantial Leverage Points: Hertzsprung-Russell Star Data

The following data are reported in Rousseeuw and Leroy (1987) and are based on Humphreys (1978) and Vansina and De Greve, J. P. (1982). The 47 observations correspond to the 47 stars of the CYG OB1 cluster in the direction of the constellation Cygnus. The regressor variable (column 2) x is the logarithm of the effective temperature at the surface of the star (Te), and the response variable (column 3) y is the logarithm of its light intensity (L/L0). The results for LS and LMS on page 28 of Rousseeuw and Leroy (1987) are based on a more precise (five decimal places) version of the data set. This data set is remarkable in that it contains four substantial leverage points (which represent giant stars) that greatly affect the results of L2 and even L1 regression. The high leverage points are observations 11, 20, 30, and 34.

ab = { 1 4.37 5.23, 2 4.56 5.74, 3 4.26 4.93,
4 4.56 5.74, 5 4.30 5.19, 6 4.46 5.46,
7 3.84 4.65, 8 4.57 5.27, 9 4.26 5.57,
10 4.37 5.12, 11 3.49 5.73, 12 4.43 5.45,
13 4.48 5.42, 14 4.01 4.05, 15 4.29 4.26,
16 4.42 4.58, 17 4.23 3.94, 18 4.42 4.18,
19 4.23 4.18, 20 3.49 5.89, 21 4.29 4.38,
22 4.29 4.22, 23 4.42 4.42, 24 4.49 4.85,
25 4.38 5.02, 26 4.42 4.66, 27 4.29 4.66,
28 4.38 4.90, 29 4.22 4.39, 30 3.48 6.05,
31 4.38 4.42, 32 4.56 5.10, 33 4.45 5.22,
34 3.49 6.29, 35 4.23 4.34, 36 4.62 5.62,
37 4.53 5.10, 38 4.45 5.22, 39 4.53 5.18,
40 4.43 5.57, 41 4.38 4.62, 42 4.45 5.06,
43 4.50 5.34, 44 4.45 5.34, 45 4.55 5.54,
46 4.45 4.98, 47 4.42 4.50 } ;

a = ab[,2]; b = ab[,3];

The following statements specify that most of the output be printed:

print "*** Hertzsprung-Russell Star Data: Do LMS ***";

optn = j(9,1,.);
optn[2]= 3; /* ipri */
optn[3]= 3; /* ilsq */
optn[8]= 3; /* icov */

call lms(sc,coef,wgt,optn,b,a);

Some simple statistics for the independent and response variables are shown in Output 12.1.1.

Output 12.1.1 Some Simple Statistics

Median and Mean

                Median          Mean

VAR1              4.42          4.31
Intercep             1             1
Response           5.1  5.0121276596

Dispersion and Standard Deviation

            Dispersion        StdDev

VAR1      0.163086244   0.2908234187
Intercep            0              0
Response  0.6671709983  0.5712493409

Partial output for LS regression is shown in Output 12.1.2.

Output 12.1.2 Table of Unweighted LS Regression

LS Parameter Estimates

                         Approx                Pr >
Variable    Estimate     Std Err     t Value    |t|      Lower WCI    Upper WCI

VAR1      -0.4133039   0.28625748     -1.44   0.1557    -0.9743582   0.14775048
Intercep   6.7934673   1.23651563      5.49   <.0001     4.3699412   9.21699339

Cov Matrix of Parameter Estimates

                  VAR1        Intercep

VAR1      0.0819433428    -0.353175807
Intercep  -0.353175807    1.5289708954

Output 12.1.3 displays the iteration history. Looking at the column "Best Criterion" in the iteration history table, you see that, with complete enumeration, the optimal solution is quickly found.
Output 12.1.3 History of the Iteration Process

                                Best
Subset      Singular       Criterion      Percent

   271             5        0.392791           25
   541             8        0.392791           50
   811            27        0.392791           75
  1081            45        0.392791          100

The results of the optimization for LMS estimation are displayed in Output 12.1.4.

Output 12.1.4 Results of Optimization

Observations of Best Subset

2 29

Estimated Coefficients

VAR1 Intercep

3.9705882353 -12.62794118

Output 12.1.5 displays the results for WLS regression. Due to the size of the scaled residuals, six observations (with numbers 7, 9, 11, 20, 30, and 34) were assigned zero weights in the following WLS analysis.
The LTS regression implements the FAST-LTS algorithm, which improves the algorithm (used in SAS/IML Version 7 and earlier versions, denoted as V7 LTS in this chapter) in Rousseeuw and Leroy (1987) by using techniques called "selective iteration" and "nested extensions." These techniques are used in the C-steps of the algorithm. See Rousseeuw and Van Driessen (2000) for details. The FAST-LTS algorithm significantly improves the speed of computation.

Output 12.1.5 Table of Weighted LS Regression Based on LMS

RLS Parameter Estimates Based on LMS

                         Approx                Pr >
Variable    Estimate     Std Err     t Value    |t|      Lower WCI    Upper WCI

VAR1      3.04615694   0.43733923      6.97   <.0001    2.18898779   3.90332608
Intercep  -8.5000549   1.92630783     -4.41   <.0001    -12.275549   -4.7245609

Cov Matrix of Parameter Estimates

                  VAR1        Intercep

VAR1      0.1912656038    -0.842128459
Intercep  -0.842128459    3.7106618752

The following statements implement the LTS regression on the Hertzsprung-Russell star data:

print "*** Hertzsprung-Russell Star Data: Do LTS ***";

optn = j(9,1,.);
optn[2]= 3; /* ipri */
optn[3]= 3; /* ilsq */
optn[8]= 3; /* icov */

call lts(sc,coef,wgt,optn,b,a);

Output 12.1.6 summarizes the information for the LTS optimization.



Output 12.1.6 Summary of Optimization

2 4 6 10 13 15 17 19 21 22 25 27 28

29 33 35 36 38 39 41 42 43 44 45 46

Output 12.1.7 displays the optimization results and Output 12.1.8 displays the weighted LS regression based
on LTS estimates.
Output 12.1.7 Results of Optimization

Estimated Coefficients

VAR1 Intercep

4.219182102 -13.6239903

Output 12.1.8 Table of Weighted LS Regression Based on LTS

RLS Parameter Estimates Based on LTS

                         Approx                Pr >
Variable    Estimate     Std Err     t Value    |t|      Lower WCI    Upper WCI

VAR1      3.04615694   0.43733923      6.97   <.0001    2.18898779   3.90332608
Intercep  -8.5000549   1.92630783     -4.41   <.0001    -12.275549   -4.7245609

Cov Matrix of Parameter Estimates

                  VAR1        Intercep

VAR1      0.1912656038    -0.842128459
Intercep  -0.842128459    3.7106618752

Example 12.2: Comparison of LMS, V7 LTS, and FAST-LTS

The following example presents comparisons of LMS, V7 LTS, and FAST-LTS. The data analyzed are the
stackloss data of Brownlee (1965), which are also used for documenting the L1 regression module. The
three explanatory variables correspond to measurements for a plant oxidizing ammonia to nitric acid on 21
consecutive days:

- x1: air flow to the plant

- x2: cooling water inlet temperature

- x3: acid concentration

The response variable y gives the permillage of ammonia lost (stackloss). The following data are also given in Rousseeuw and Leroy (1987) and Osborne (1985):

print "Stackloss Data";


aa = { 1 80 27 89 42,
1 80 27 88 37,
1 75 25 90 37,
1 62 24 87 28,
1 62 22 87 18,
1 62 23 87 18,
1 62 24 93 19,
1 62 24 93 20,
1 58 23 87 15,
1 58 18 80 14,
1 58 18 89 14,
1 58 17 88 13,
1 58 18 82 11,
1 58 19 93 12,
1 50 18 89 8,
1 50 18 86 7,
1 50 19 72 8,
1 50 19 79 8,
1 50 20 80 9,
1 56 20 82 15,
1 70 20 91 15 };

Rousseeuw and Leroy (1987) cite a large number of papers in which the preceding data set was analyzed.
They state that most researchers concluded that observations 1, 3, 4, and 21 were outliers and that some
people also reported observation 2 as an outlier.

Consider 2,000 Random Subsets for LMS

For N = 21 and n = 4 (three explanatory variables including the intercept), you obtain a total of 5,985 different subsets of 4 observations out of 21. If you do not specify OPTN[5], the LMS algorithms draw Nrep = 2000 random sample subsets. Since there is a large number of subsets with singular linear systems that you do not want to print, you can choose OPTN[2]=2 for reduced printed output, as in the following statements:

title2 "***Use 2000 Random Subsets for LMS***";

a = aa[,2:4]; b = aa[,5];
optn = j(9,1,.);
optn[2]= 2; /* ipri */
optn[3]= 3; /* ilsq */
optn[8]= 3; /* icov */

call lms(sc,coef,wgt,optn,b,a);
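Incidentally, the count of 5,985 subsets quoted above is simply the number of ways to choose 4 observations from 21:

$$ \binom{21}{4} = \frac{21 \cdot 20 \cdot 19 \cdot 18}{4!} = 5985 $$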

Summary statistics are shown in Output 12.2.1. Output 12.2.2 displays the results of LS regression. Out-
put 12.2.3 displays the LMS results for the 2000 random subsets.

Output 12.2.1 Some Simple Statistics

***Use 2000 Random Subsets for LMS***

Median and Mean

                Median          Mean

VAR1                58  60.428571429
VAR2                20  21.095238095
VAR3                87  86.285714286
Intercep             1             1
Response            15  17.523809524

Dispersion and Standard Deviation

            Dispersion        StdDev

VAR1       5.930408874  9.1682682584
VAR2       2.965204437   3.160771455
VAR3      4.4478066555  5.3585712381
Intercep             0             0
Response   5.930408874  10.171622524

Output 12.2.2 Table of Unweighted LS Regression

LS Parameter Estimates

                         Approx                Pr >
Variable    Estimate     Std Err     t Value    |t|      Lower WCI    Upper WCI

VAR1       0.7156402   0.13485819      5.31   <.0001    0.45132301   0.97995739
VAR2      1.29528612   0.36802427      3.52   0.0026    0.57397182   2.01660043
VAR3      -0.1521225   0.15629404     -0.97   0.3440    -0.4584532   0.15420818
Intercep  -39.919674   11.8959969     -3.36   0.0038      -63.2354   -16.603949

Cov Matrix of Parameter Estimates

                  VAR1            VAR2            VAR3        Intercep

VAR1      0.0181867302    -0.036510675    -0.007143521    0.2875871057
VAR2      -0.036510675    0.1354418598    0.0000104768    -0.651794369
VAR3      -0.007143521    0.0000104768     0.024427828    -1.676320797
Intercep  0.2875871057    -0.651794369    -1.676320797    141.51474107

Output 12.2.3 Iteration History and Optimization Results

                                Best
Subset      Singular       Criterion      Percent

   500            23        0.163262           25
  1000            55        0.140519           50
  1500            79        0.140519           75
  2000           103        0.126467          100

Observations of Best Subset

15 11 19 10

Estimated Coefficients

VAR1 VAR2 VAR3 Intercep

0.75 0.5 0 -39.25

For LMS, observations 1, 3, 4, and 21 have scaled residuals larger than 2.5 (output not shown), and they are
considered outliers. Output 12.2.4 displays the corresponding WLS results.

Output 12.2.4 Table of Weighted LS Regression

RLS Parameter Estimates Based on LMS

                         Approx                Pr >
Variable    Estimate     Std Err     t Value    |t|      Lower WCI    Upper WCI

VAR1      0.79768556   0.06743906     11.83   <.0001    0.66550742    0.9298637
VAR2      0.57734046   0.16596894      3.48   0.0041    0.25204731    0.9026336
VAR3      -0.0670602   0.06160314     -1.09   0.2961    -0.1878001   0.05367975
Intercep  -37.652459   4.73205086     -7.96   <.0001    -46.927108    -28.37781

Cov Matrix of Parameter Estimates

                  VAR1            VAR2            VAR3        Intercep

VAR1      0.0045480273    -0.007921409    -0.001198689    0.0015681747
VAR2      -0.007921409    0.0275456893     -0.00046339    -0.065017508
VAR3      -0.001198689     -0.00046339    0.0037949466    -0.246102248
Intercep  0.0015681747    -0.065017508    -0.246102248    22.392305355

Consider 2,000 Random Subsets for V7 LTS

The V7 LTS algorithm is similar to the LMS algorithm. Here is the code:

title2 "***Use 2000 Random Subsets for LTS***";

a = aa[,2:4]; b = aa[,5];
optn = j(9,1,.);
optn[2]= 2; /* ipri */
optn[3]= 3; /* ilsq */
optn[8]= 3; /* icov */
optn[9]= 1; /* V7 LTS */

call lts(sc,coef,wgt,optn,b,a);

Output 12.2.5 Iteration History and Optimization Results

***Use 2000 Random Subsets for LTS***

                                Best
Subset      Singular       Criterion      Percent

   500            23        0.099507           25
  1000            55        0.087814           50
  1500            79        0.084061           75
  2000           103        0.084061          100

Observations of Best Subset

10 11 7 15

Estimated Coefficients

VAR1 VAR2 VAR3 Intercep

0.75 0.3333333333 0 -35.70512821

In addition to observations 1, 3, 4, and 21, which were considered outliers in LMS, observations 2 and 13 have absolute scaled residuals for LTS that are larger than 2.5, although not by as much as observations 1, 3, 4, and 21 (output not shown). Therefore, the WLS results based on LTS are different from those based on LMS.
Output 12.2.6 displays the results for the weighted LS regression.

Output 12.2.6 Table of Weighted LS Regression

RLS Parameter Estimates Based on LTS

                         Approx                Pr >
Variable    Estimate     Std Err     t Value    |t|      Lower WCI    Upper WCI

VAR1      0.75694055   0.07860766      9.63   <.0001    0.60287236   0.91100874
VAR2      0.45353029   0.13605033      3.33   0.0067    0.18687654   0.72018405
VAR3        -0.05211   0.05463722     -0.95   0.3607     -0.159197     0.054977
Intercep   -34.05751   3.82881873     -8.90   <.0001    -41.561857   -26.553163

Cov Matrix of Parameter Estimates

                  VAR1            VAR2            VAR3        Intercep

VAR1      0.0061791648    -0.005776855    -0.002300587    -0.034290068
VAR2      -0.005776855    0.0185096933    0.0002582502    -0.069740883
VAR3      -0.002300587    0.0002582502    0.0029852254    -0.131487406
Intercep  -0.034290068    -0.069740883    -0.131487406    14.659852903

Consider 500 Random Subsets for FAST-LTS

The FAST-LTS algorithm uses only 500 random subsets and gets better optimization results. Here is the
code:

title2 "***Use 500 Random Subsets for FAST-LTS***";

a = aa[,2:4]; b = aa[,5];
optn = j(9,1,.);
optn[2]= 2; /* ipri */
optn[3]= 3; /* ilsq */
optn[8]= 3; /* icov */
optn[9]= 0; /* FAST-LTS */

call lts(sc,coef,wgt,optn,b,a);

For this example, the two LTS algorithms identify the same outliers; however, the FAST-LTS algorithm uses
only 500 random subsets and gets a smaller objective function, as seen in Output 12.2.7. For large data sets,
the advantages of the FAST-LTS algorithm are more obvious.

Output 12.2.7 Optimization Results for FAST-LTS

***Use 500 Random Subsets for FAST-LTS***

5 6 7 8 9 10 11 12 15 16 17 18 19

Estimated Coefficients

VAR1 VAR2 VAR3 Intercep

0.7409210642 0.3915267228 0.0111345398 -37.32332647

Consider All 5,985 Subsets

You now report the results of LMS for all different subsets. Here is the code:

title2 "*** Use All 5985 Subsets***";

a = aa[,2:4]; b = aa[,5];
optn = j(9,1,.);
optn[2]= 2; /* ipri */
optn[3]= 3; /* ilsq */
optn[5]= -1; /* nrep: all 5985 subsets */
optn[8]= 3; /* icov */

ods select IterHist0 BestSubset EstCoeff;

call lms(sc,coef,wgt,optn,b,a);

Output 12.2.8 displays the results for LMS.

Output 12.2.8 Iteration History and Optimization Results for LMS

*** Use All 5985 Subsets***

                                Best
Subset      Singular       Criterion      Percent

  1497            36        0.185899           25
  2993            87        0.158268           50
  4489           149        0.140519           75
  5985           266        0.126467          100

Observations of Best Subset

8 10 15 19

Estimated Coefficients

VAR1 VAR2 VAR3 Intercep

0.75 0.5 1.29526E-16 -39.25

Next, report the results of LTS for all different subsets, as follows:

title2 "*** Use All 5985 Subsets***";

a = aa[,2:4]; b = aa[,5];
optn = j(9,1,.);
optn[2]= 2; /* ipri */
optn[3]= 3; /* ilsq */
optn[5]= -1; /* nrep: all 5985 subsets */
optn[8]= 3; /* icov */
optn[9]= 1; /* V7 LTS */

ods select IterHist0 BestSubset EstCoeff;

call lts(sc,coef,wgt,optn,b,a);

Output 12.2.9 displays the results for LTS.



Output 12.2.9 Iteration History and Optimization Results for LTS

                                Best
Subset      Singular       Criterion      Percent

  1497            36        0.135449           25
  2993            87        0.107084           50
  4489           149        0.081536           75
  5985           266        0.081536          100

Observations of Best Subset

5 12 17 18

Estimated Coefficients

VAR1 VAR2 VAR3 Intercep

0.7291666667 0.4166666667 -8.54713E-18 -36.22115385

Example 12.3: LMS and LTS Univariate (Location) Problem: Barnett and Lewis Data

If you do not specify the matrix X as the last input argument, the regression problem is reduced to the estimation problem of the location parameter a. The following example is described in Rousseeuw and Leroy (1987):

title2 "*** Barnett and Lewis (1978) ***";

b = { 3, 4, 7, 8, 10, 949, 951 };

optn = j(9,1,.);
optn[2]= 3; /* ipri */
optn[3]= 3; /* ilsq */
optn[8]= 3; /* icov */

call lms(sc,coef,wgt,optn,b);

Output 12.3.1 shows the results of the unweighted LS regression.



Output 12.3.1 Table of Unweighted LS Regression

*** Barnett and Lewis (1978) ***

LS Residuals

N      Observed       Residual      Res / S

1      3.000000    -273.000000    -0.592916
2      4.000000    -272.000000    -0.590744
3      7.000000    -269.000000    -0.584229
4      8.000000    -268.000000    -0.582057
5     10.000000    -266.000000    -0.577713
6    949.000000     673.000000     1.461658
7    951.000000     675.000000     1.466002

MinRes    1st Qu.    Median
  -273       -272      -268

Mean    3rd Qu.    MaxRes
   0       -266       675

Output 12.3.2 shows the results for LMS regression.

Output 12.3.2 Table of LMS Results

LMS Residuals

N      Observed      Residual       Res / S

1      3.000000     -2.500000     -0.819232
2      4.000000     -1.500000     -0.491539
3      7.000000      1.500000      0.491539
4      8.000000      2.500000      0.819232
5     10.000000      4.500000      1.474617
6    949.000000    943.500000    309.178127
7    951.000000    945.500000    309.833512

MinRes    1st Qu.    Median
  -2.5       -1.5       2.5

Mean    3rd Qu.    MaxRes
270.5        4.5     945.5

You obtain the LMS location estimate 5.5 compared with the mean 276 (which is the LS estimate of the location parameter) and the median 8. The scale estimate in the univariate problem is a resistant (high breakdown) estimator for the dispersion of the data (see Rousseeuw and Leroy (1987)).

For weighted LS regression, the last two observations are ignored (that is, given zero weights), as shown in
Output 12.3.3.

Output 12.3.3 Table of Weighted LS Regression

Weighted LS Residuals

N      Observed      Residual       Res / S      Weight

1      3.000000     -3.400000     -1.180157    1.000000
2      4.000000     -2.400000     -0.833052    1.000000
3      7.000000      0.600000      0.208263    1.000000
4      8.000000      1.600000      0.555368    1.000000
5     10.000000      3.600000      1.249578    1.000000
6    949.000000    942.600000    327.181236           0
7    951.000000    944.600000    327.875447           0

MinRes    1st Qu.    Median
  -3.4       -2.4       1.6

Mean    3rd Qu.    MaxRes
269.6        3.6     944.6

Use the following statements to obtain results from LTS:

title2 "*** Barnett and Lewis (1978) ***";

b = { 3, 4, 7, 8, 10, 949, 951 };

optn = j(9,1,.);
optn[2]= 3; /* ipri */
optn[3]= 3; /* ilsq */
optn[8]= 3; /* icov */

call lts(sc,coef,wgt,optn,b);

The results for LTS are similar to those reported for LMS in Rousseeuw and Leroy (1987), as shown in
Output 12.3.4.

Output 12.3.4 Table of LTS Results

*** Barnett and Lewis (1978) ***

LTS Residuals

N Observed Residual Res / S

1 3.000000 -2.500000 -0.819232


2 4.000000 -1.500000 -0.491539
3 7.000000 1.500000 0.491539
4 8.000000 2.500000 0.819232
5 10.000000 4.500000 1.474617
6 949.000000 943.500000 309.178127
7 951.000000 945.500000 309.833512

MinRes 1st Qu. Median

-2.5 -1.5 2.5

Mean 3rd Qu. MaxRes

270.5 4.5 945.5

Since nonzero weights are chosen for the same observations as with LMS, the WLS results based on LTS
agree with those based on LMS (shown previously in Output 12.3.3).
In summary, you obtain the following estimates for the location parameter (the weighted mean is verified
in the sketch that follows this list):

• LS estimate (unweighted mean) = 276

• Median = 8

• LMS estimate = 5.5

• LTS estimate = 5.5

• WLS estimate (weighted mean based on LMS or LTS) = 6.4

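The following statements are a minimal sketch (not part of the original example) that verifies the weighted
mean, using the zero weights for the last two observations shown in Output 12.3.3:

b = { 3, 4, 7, 8, 10, 949, 951 };
w = { 1, 1, 1, 1, 1, 0, 0 };       /* weights shown in Output 12.3.3 */
wlsEst = sum(w # b) / sum(w);      /* weighted mean = 32/5 = 6.4 */
print wlsEst;
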
Using MVE and MCD

The SCATMVE and SCATMCD modules are used in these examples for plotting the results. The PRIMVE
module can be used for printing results. These routines are in the robustmc.sas file that is contained in
the sample library.

Example 12.4: Brainlog Data

The following data are the body weights (in kilograms) and brain weights (in grams) of N = 28 animals,
as reported by Jerison (1973) and as analyzed in Rousseeuw and Leroy (1987). Instead of the original data,
the following example uses the logarithms of the measurements of the two variables.
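
The following statements are a minimal sketch (not part of the original example; the raw measurements are
hypothetical but reproduce the first row of the matrix aa) that shows how the logged values are formed:

body = 1.35;                          /* body weight in kg (hypothetical) */
brain = 8.1;                          /* brain weight in g (hypothetical) */
row1 = log10(body) || log10(brain);   /* = { 0.1303338 0.9084851 } */
print row1;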

title "*** Brainlog Data: Do MCD, MVE ***";


aa={ 1.303338E-001 9.084851E-001 ,
2.6674530 2.6263400 ,
1.5602650 2.0773680 ,
1.4418520 2.0606980 ,
1.703332E-002 7.403627E-001 ,
4.0681860 1.6989700 ,
3.4060290 3.6630410 ,
2.2720740 2.6222140 ,
2.7168380 2.8162410 ,
1.0000000 2.0606980 ,
5.185139E-001 1.4082400 ,
2.7234560 2.8325090 ,
2.3159700 2.6085260 ,
1.7923920 3.1205740 ,
3.8230830 3.7567880 ,
3.9731280 1.8450980 ,
8.325089E-001 2.2528530 ,
1.5440680 1.7481880 ,
-9.208187E-001 .0000000 ,
-1.6382720 -3.979400E-001 ,
3.979400E-001 1.0827850 ,
1.7442930 2.2430380 ,
2.0000000 2.1959000 ,
1.7173380 2.6434530 ,
4.9395190 2.1889280 ,
-5.528420E-001 2.787536E-001 ,
-9.136401E-001 4.771213E-001 ,
2.2833010 2.2552720 };

By default, the MVE subroutine uses only 1500 randomly selected subsets rather than all subsets. The
following specification of the options vector requires that all 3276 subsets of 3 cases out of 28 cases are
generated and evaluated:

title2 "***MVE for BrainLog Data***";


title3 "*** Use All Subsets***";
optn = j(9,1,.);
optn[1]= 3; /* ipri */
optn[2]= 1; /* pcov: print COV */
optn[3]= 1; /* pcor: print CORR */
optn[5]= -1; /* nrep: all subsets */

call mve(sc,xmve,dist,optn,aa);

Specifying OPTN[1]=3, OPTN[2]=1, and OPTN[3]=1 requests that all output be printed. Output 12.4.1
shows the classical scatter and correlation matrix.

Output 12.4.1 Some Simple Statistics

*** Brainlog Data: Do MCD, MVE ***


***MVE for BrainLog Data***
*** Use All Subsets***

Classical Covariance Matrix

VAR1 VAR2

VAR1 2.6816512357 1.3300846932


VAR2 1.3300846932 1.0857537552

Classical Correlation Matrix

VAR1 VAR2

VAR1 1 0.7794934643
VAR2 0.7794934643 1

Classical Mean

VAR1 1.6378572186
VAR2 1.9219465964

Output 12.4.2 shows the results of the combinatoric optimization (complete subset sampling).

Output 12.4.2 Iteration History for MVE

Best
Subset Singular Criterion Percent

819 0 0.439709 25
1638 0 0.439709 50
2457 0 0.439709 75
3276 0 0.439709 100

Observations of Best Subset

1 22 28

Initial MVE Location


Estimates

VAR1 1.3859759333
VAR2 1.8022650333

Initial MVE Scatter Matrix

VAR1 VAR2

VAR1 4.9018525125 3.2937139101


VAR2 3.2937139101 2.3400650932

Output 12.4.3 shows the optimization results after local improvement.

Output 12.4.3 Table of MVE Results

Robust MVE Location


Estimates

VAR1 1.29528238
VAR2 1.8733722792

Robust MVE Scatter Matrix

VAR1 VAR2

VAR1 2.0566592937 1.5290250167


VAR2 1.5290250167 1.2041353589

Eigenvalues of Robust
Scatter Matrix

VAR1 3.2177274012
VAR2 0.0430672514

Robust Correlation Matrix

VAR1 VAR2

VAR1 1 0.9716184659
VAR2 0.9716184659 1

Output 12.4.4 presents a table that contains the classical Mahalanobis distances, the robust distances, and
the weights identifying the outlier observations.

Output 12.4.4 Mahalanobis and Robust Distances

Classical Distances and Robust (Rousseeuw) Distances


Unsquared Mahalanobis Distance and
Unsquared Rousseeuw Distance of Each Observation
Mahalanobis Robust
N Distances Distances Weight

1 1.006591 0.897076 1.000000


2 0.695261 1.405302 1.000000
3 0.300831 0.186726 1.000000
4 0.380817 0.318701 1.000000
5 1.146485 1.135697 1.000000
6 2.644176 8.828036 0
7 1.708334 1.699233 1.000000
8 0.706522 0.686680 1.000000
9 0.858404 1.084163 1.000000
10 0.798698 1.580835 1.000000
11 0.686485 0.693346 1.000000
12 0.874349 1.071492 1.000000
13 0.677791 0.717545 1.000000
14 1.721526 3.398698 0
15 1.761947 1.762703 1.000000
16 2.369473 7.999472 0
17 1.222253 2.805954 0
18 0.203178 1.207332 1.000000
19 1.855201 1.773317 1.000000
20 2.266268 2.074971 1.000000
21 0.831416 0.785954 1.000000
22 0.416158 0.342200 1.000000
23 0.264182 0.918383 1.000000
24 1.046120 1.782334 1.000000
25 2.911101 9.565443 0
26 1.586458 1.543748 1.000000
27 1.582124 1.808423 1.000000
28 0.394664 1.523235 1.000000

Again, you can call the subroutine SCATMVE(), which is included in the sample library in the file
robustmc.sas, to plot the classical and robust confidence ellipsoids, as follows:

optn = j(9,1,.); optn[5]= -1;


vnam = { "Log Body Wgt","Log Brain Wgt" };
filn = "brlmve";
titl = "BrainLog Data: MVE Use All Subsets";
%include 'robustmc.sas';
call scatmve(2,optn,.9,aa,vnam,titl,1,filn);

The plot is shown in Output 12.4.5.



Output 12.4.5 BrainLog Data: Classical and Robust Ellipsoid (MVE)

MCD is another subroutine that can be used to compute the robust location and the robust covariance of
multivariate data sets. Here is the code:

title "*** Brainlog Data: Do MCD, MVE ***";


aa={ 1.303338E-001 9.084851E-001 ,
2.6674530 2.6263400 ,
1.5602650 2.0773680 ,
1.4418520 2.0606980 ,
1.703332E-002 7.403627E-001 ,
4.0681860 1.6989700 ,
3.4060290 3.6630410 ,
2.2720740 2.6222140 ,
2.7168380 2.8162410 ,
1.0000000 2.0606980 ,
5.185139E-001 1.4082400 ,
2.7234560 2.8325090 ,
2.3159700 2.6085260 ,
1.7923920 3.1205740 ,
3.8230830 3.7567880 ,
3.9731280 1.8450980 ,
8.325089E-001 2.2528530 ,
1.5440680 1.7481880 ,
-9.208187E-001 .0000000 ,
-1.6382720 -3.979400E-001 ,
3.979400E-001 1.0827850 ,
1.7442930 2.2430380 ,
2.0000000 2.1959000 ,
1.7173380 2.6434530 ,
4.9395190 2.1889280 ,
-5.528420E-001 2.787536E-001 ,
-9.136401E-001 4.771213E-001 ,
2.2833010 2.2552720 };

title2 "***MCD for BrainLog Data***";


title3 "*** Use 500 Random Subsets***";

optn = j(9,1,.);
optn[1]= 3; /* ipri */
optn[2]= 1; /* pcov: print COV */
optn[3]= 1; /* pcor: print CORR */

call mcd(sc,xmve,dist,optn,aa);

Similarly, specifying OPTN[1]=3, OPTN[2]=1, and OPTN[3]=1 requests that all output be printed.
Output 12.4.6 shows the results of the optimization.

Output 12.4.6 Results of the Optimization

*** Brainlog Data: Do MCD, MVE ***


***MCD for BrainLog Data***
*** Use 500 Random Subsets***

1 2 3 4 5 8 9 11 12 13 18 21 22 23 28

MCD Location Estimate

VAR1 VAR2

1.622226068 2.0150777867

MCD Scatter Matrix Estimate

VAR1 VAR2

VAR1 0.8973945995 0.6424456706


VAR2 0.6424456706 0.4793505736

Output 12.4.7 shows the reweighted results after removing outliers.

Output 12.4.7 Final Reweighted MCD Results

Reweighted Location Estimate

VAR1 VAR2

1.3154029661 1.8568731174

Reweighted Scatter Matrix

VAR1 VAR2

VAR1 2.139986054 1.6068556533


VAR2 1.6068556533 1.2520384784

Eigenvalues

VAR1 VAR2

3.363074897 0.0289496354

Output 12.4.7 continued

Reweighted Correlation Matrix

VAR1 VAR2

VAR1 1 0.9816633012
VAR2 0.9816633012 1

Output 12.4.8 presents a table that contains the classical Mahalanobis distances, the robust distances, and
the weights identifying the outlier observations.

Output 12.4.8 Mahalanobis and Robust Distances (MCD)

Classical Distances and Robust (Rousseeuw) Distances


Unsquared Mahalanobis Distance and
Unsquared Rousseeuw Distance of Each Observation
Mahalanobis Robust
N Distances Distances Weight

1 1.006591 0.855347 1.000000


2 0.695261 1.477050 1.000000
3 0.300831 0.239828 1.000000
4 0.380817 0.517719 1.000000
5 1.146485 1.108362 1.000000
6 2.644176 10.599341 0
7 1.708334 1.808455 1.000000
8 0.706522 0.690099 1.000000
9 0.858404 1.052423 1.000000
10 0.798698 2.077131 1.000000
11 0.686485 0.888545 1.000000
12 0.874349 1.035824 1.000000
13 0.677791 0.683978 1.000000
14 1.721526 4.257963 0
15 1.761947 1.716065 1.000000
16 2.369473 9.584992 0
17 1.222253 3.571700 0
18 0.203178 1.323783 1.000000
19 1.855201 1.741064 1.000000
20 2.266268 2.026528 1.000000
21 0.831416 0.743545 1.000000
22 0.416158 0.419923 1.000000
23 0.264182 0.944610 1.000000
24 1.046120 2.289334 1.000000
25 2.911101 11.471953 0
26 1.586458 1.518721 1.000000
27 1.582124 2.054593 1.000000
28 0.394664 1.675651 1.000000

You can call the subroutine SCATMCD(), which is included in the sample library in the file robustmc.sas, to
plot the classical and robust confidence ellipsoids. Here is the code:

optn = j(9,1,.); optn[5]= -1;



vnam = { "Log Body Wgt","Log Brain Wgt" };


filn = "brlmcd";
titl = "BrainLog Data: MCD";
%include 'robustmc.sas';
call scatmcd(2,optn,.9,aa,vnam,titl,1,filn);

The plot is shown in Output 12.4.9.

Output 12.4.9 BrainLog Data: Classical and Robust Ellipsoid (MCD)

Example 12.5: Stackloss Data

The following example analyzes the three regressors of the Brownlee (1965) stackloss data. By default, the
MVE subroutine tries only 2000 randomly selected subsets in its search. There are, in total, 5985 subsets of
4 cases out of 21 cases. Here is the code:

title2 "***MVE for Stackloss Data***";


title3 "*** Use All Subsets***";
aa = { 1 80 27 89 42,
1 80 27 88 37,
1 75 25 90 37,
1 62 24 87 28,
1 62 22 87 18,
1 62 23 87 18,
1 62 24 93 19,
1 62 24 93 20,
1 58 23 87 15,
1 58 18 80 14,
1 58 18 89 14,
1 58 17 88 13,
1 58 18 82 11,
1 58 19 93 12,
1 50 18 89 8,
1 50 18 86 7,
1 50 19 72 8,
1 50 19 79 8,
1 50 20 80 9,
1 56 20 82 15,
1 70 20 91 15 };
a = aa[,2:4];
optn = j(9,1,.);
optn[1]= 2; /* ipri */
optn[2]= 1; /* pcov: print COV */
optn[3]= 1; /* pcor: print CORR */
optn[5]= -1; /* nrep: use all subsets */

call mve(sc,xmve,dist,optn,a);

Output 12.5.1 shows the classical scatter and correlation matrices.

Output 12.5.1 Some Simple Statistics

*** Brainlog Data: Do MCD, MVE ***


***MVE for Stackloss Data***
*** Use All Subsets***

Classical Covariance Matrix

VAR1 VAR2 VAR3

VAR1 84.057142857 22.657142857 24.571428571


VAR2 22.657142857 9.9904761905 6.6214285714
VAR3 24.571428571 6.6214285714 28.714285714

Classical Correlation Matrix

VAR1 VAR2 VAR3

VAR1 1 0.781852333 0.5001428749


VAR2 0.781852333 1 0.3909395378
VAR3 0.5001428749 0.3909395378 1

Classical Mean

VAR1 60.428571429
VAR2 21.095238095
VAR3 86.285714286

Output 12.5.2 shows the results of the optimization (complete subset sampling).

Output 12.5.2 Iteration History

Best
Subset Singular Criterion Percent

1497 22 253.312431 25
2993 46 224.084073 50
4489 77 165.830053 75
5985 156 165.634363 100

Output 12.5.2 continued

Observations of Best Subset

7 10 14 20

Initial MVE Location


Estimates

VAR1 58.5
VAR2 20.25
VAR3 87

Initial MVE Scatter Matrix

VAR1 VAR2 VAR3

VAR1 34.829014749 28.413143611 62.32560534


VAR2 28.413143611 38.036950318 58.659393261
VAR3 62.32560534 58.659393261 267.63348175

Output 12.5.3 shows the optimization results after local improvement.

Output 12.5.3 Table of MVE Results

Robust MVE Location


Estimates

VAR1 56.705882353
VAR2 20.235294118
VAR3 85.529411765

Robust MVE Scatter Matrix

VAR1 VAR2 VAR3

VAR1 23.470588235 7.5735294118 16.102941176


VAR2 7.5735294118 6.3161764706 5.3676470588
VAR3 16.102941176 5.3676470588 32.389705882

Eigenvalues of Robust
Scatter Matrix

VAR1 46.597431018
VAR2 12.155938483
VAR3 3.423101087

Robust Correlation Matrix

VAR1 VAR2 VAR3

VAR1 1 0.6220269501 0.5840361335


VAR2 0.6220269501 1 0.375278187
VAR3 0.5840361335 0.375278187 1

Output 12.5.4 presents a table that contains the classical Mahalanobis distances, the robust distances, and
the weights identifying the outlying observations (that is, the leverage points when explaining y with these
three regressor variables).

Output 12.5.4 Mahalanobis and Robust Distances

Classical Distances and Robust (Rousseeuw) Distances


Unsquared Mahalanobis Distance and
Unsquared Rousseeuw Distance of Each Observation
Mahalanobis Robust
N Distances Distances Weight

1 2.253603 5.528395 0
2 2.324745 5.637357 0
3 1.593712 4.197235 0
4 1.271898 1.588734 1.000000
5 0.303357 1.189335 1.000000
6 0.772895 1.308038 1.000000
7 1.852661 1.715924 1.000000
8 1.852661 1.715924 1.000000
9 1.360622 1.226680 1.000000
10 1.745997 1.936256 1.000000
11 1.465702 1.493509 1.000000
12 1.841504 1.913079 1.000000
13 1.482649 1.659943 1.000000
14 1.778785 1.689210 1.000000
15 1.690241 2.230109 1.000000
16 1.291934 1.767582 1.000000
17 2.700016 2.431021 1.000000
18 1.503155 1.523316 1.000000
19 1.593221 1.710165 1.000000
20 0.807054 0.675124 1.000000
21 2.176761 3.657281 0

The following specification generates three bivariate plots of the classical and robust tolerance ellipsoids.
They are shown in Output 12.5.5, Output 12.5.6, and Output 12.5.7, one plot for each pair of variables.

optn = j(9,1,.); optn[5]= -1;


vnam = { "Rate", "Temperature", "AcidConcent" };
filn = "stlmve";
titl = "Stackloss Data: Use All Subsets";
%include 'robustmc.sas';
call scatmve(2,optn,.9,a,vnam,titl,1,filn);

Output 12.5.5 Stackloss Data: Rate vs. Temperature (MVE)

Output 12.5.6 Stackloss Data: Rate vs. Acid Concentration (MVE)



Output 12.5.7 Stackloss Data: Temperature vs. Acid Concentration (MVE)

You can also use the MCD method for the stackloss data as follows:

title2 "***MCD for Stackloss Data***";


title3 "*** Use 500 Random Subsets***";
a = aa[,2:4];
optn = j(8,1,.);
optn[1]= 2; /* ipri */
optn[2]= 1; /* pcov: print COV */
optn[3]= 1; /* pcor: print CORR */
optn[5]= -1 ; /* nrep: use all subsets */

call mcd(sc,xmcd,dist,optn,a);

The optimization results are displayed in Output 12.5.8. The reweighted results are displayed in
Output 12.5.9.

Output 12.5.8 MCD Results of Optimization

*** Brainlog Data: Do MCD, MVE ***


***MCD for Stackloss Data***
*** Use 500 Random Subsets***

4 5 6 7 8 9 10 11 12 13 14 20

MCD Location Estimate

VAR1 VAR2 VAR3

59.5 20.833333333 87.333333333



Output 12.5.8 continued

MCD Scatter Matrix Estimate

VAR1 VAR2 VAR3

VAR1 5.1818181818 4.8181818182 4.7272727273


VAR2 4.8181818182 7.6060606061 5.0606060606
VAR3 4.7272727273 5.0606060606 19.151515152

MCD Correlation Matrix

VAR1 VAR2 VAR3

VAR1 1 0.7674714142 0.4745347313


VAR2 0.7674714142 1 0.4192963398
VAR3 0.4745347313 0.4192963398 1

Consistent Scatter Matrix

VAR1 VAR2 VAR3

VAR1 8.6578437815 8.0502757968 7.8983838007


VAR2 8.0502757968 12.708297013 8.4553211199
VAR3 7.8983838007 8.4553211199 31.998580526

Output 12.5.9 Final Reweighted MCD Results

Reweighted Location Estimate

VAR1 VAR2 VAR3

59.5 20.833333333 87.333333333

Reweighted Scatter Matrix

VAR1 VAR2 VAR3

VAR1 5.1818181818 4.8181818182 4.7272727273


VAR2 4.8181818182 7.6060606061 5.0606060606
VAR3 4.7272727273 5.0606060606 19.151515152

Eigenvalues

VAR1 VAR2 VAR3

23.191069268 7.3520037086 1.3963209628

Reweighted Correlation Matrix

VAR1 VAR2 VAR3

VAR1 1 0.7674714142 0.4745347313


VAR2 0.7674714142 1 0.4192963398
VAR3 0.4745347313 0.4192963398 1

The MCD robust distances and outlier diagnostics are displayed in Output 12.5.10. MCD identifies more
leverage points than MVE.

Output 12.5.10 MCD Robust Distances

Classical Distances and Robust (Rousseeuw) Distances


Unsquared Mahalanobis Distance and
Unsquared Rousseeuw Distance of Each Observation
Mahalanobis Robust
N Distances Distances Weight

1 2.253603 12.173282 0
2 2.324745 12.255677 0
3 1.593712 9.263990 0
4 1.271898 1.401368 1.000000
5 0.303357 1.420020 1.000000
6 0.772895 1.291188 1.000000
7 1.852661 1.460370 1.000000
8 1.852661 1.460370 1.000000
9 1.360622 2.120590 1.000000
10 1.745997 1.809708 1.000000
11 1.465702 1.362278 1.000000
12 1.841504 1.667437 1.000000
13 1.482649 1.416724 1.000000
14 1.778785 1.988240 1.000000
15 1.690241 5.874858 0
16 1.291934 5.606157 0
17 2.700016 6.133319 0
18 1.503155 5.760432 0
19 1.593221 6.156248 0
20 0.807054 2.172300 1.000000
21 2.176761 7.622769 0

Similarly, you can use the SCATMCD routine to generate three bivariate plots of the classical and robust
tolerance ellipsoids, one plot for each pair of variables. Here is the code:

optn = j(9,1,.); optn[5]= -1;


vnam = { "Rate", "Temperature", "AcidConcent" };
filn = "stlmcd";
titl = "Stackloss Data: Use All Subsets";
%include 'robustmc.sas';
call scatmcd(2,optn,.9,a,vnam,titl,1,filn);

Output 12.5.11, Output 12.5.12, and Output 12.5.13 display these plots.

Output 12.5.11 Stackloss Data: Rate vs. Temperature (MCD)

Output 12.5.12 Stackloss Data: Rate vs. Acid Concentration (MCD)



Output 12.5.13 Stackloss Data: Temperature vs. Acid Concentration (MCD)

Combining Robust Residual and Robust Distance

This section is based entirely on Rousseeuw and Van Zomeren (1990). Observations x_i that are far
away from most of the other observations are called leverage points. One classical method inspects the
Mahalanobis distances MD_i to find outliers x_i:

MD_i = sqrt( (x_i - x̄) C^{-1} (x_i - x̄)^T )

where x̄ is the sample mean and C is the classical sample covariance matrix.


Note that the MVE subroutine prints the classical Mahalanobis distances MD_i together with the robust
distances RD_i. In classical linear regression, the diagonal elements h_ii of the hat matrix

H = X(X^T X)^{-1} X^T

are used to identify leverage points. Rousseeuw and Van Zomeren (1990) report the following monotone
relationship between the h_ii and MD_i:

h_ii = (MD_i)^2 / (N - 1) + 1/N

They point out that neither the MD_i nor the h_ii are entirely safe for detecting leverage points reliably.
Multiple outliers do not necessarily have large MD_i values because of the masking effect.
The definition of a leverage point is, therefore, based entirely on the outlyingness of x_i and is not related
to the response value y_i. By including the y_i value in the definition, Rousseeuw and Van Zomeren (1990)
distinguish between the following:

• Good leverage points are points (x_i, y_i) that are close to the regression plane; that is, good leverage
points improve the precision of the regression coefficients.

• Bad leverage points are points (x_i, y_i) that are far from the regression plane; that is, bad leverage
points reduce the precision of the regression coefficients.

Rousseeuw and Van Zomeren (1990) propose to plot the standardized residuals of robust regression (LMS
or LTS) versus the robust distances RD_i obtained from MVE. Two horizontal lines that correspond to
residual values of +2.5 and -2.5 are useful to distinguish between small and large residuals, and one vertical
line that corresponds to sqrt(χ²_{n,0.975}) is used to distinguish between small and large distances.
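
The following statements are a minimal sketch (not part of the original text) that computes these cutoffs and
classifies points, assuming that the standardized robust residuals resid and the robust distances rd have
already been computed (for example, by the LMS and MVE calls used in these examples); the values below
are hypothetical, and n is the number of regressors:

n = 3;                             /* number of regressors (hypothetical) */
resid = { 0.5, -3.1, 0.2, 2.8 };   /* standardized robust residuals (hypothetical) */
rd = { 1.2, 6.4, 7.0, 1.5 };       /* robust distances (hypothetical) */
cutoff = sqrt(cinv(0.975, n));     /* vertical cutoff line */
bigRes = (abs(resid) > 2.5);       /* outside the horizontal lines at +/-2.5 */
bigDist = (rd > cutoff);           /* to the right of the vertical line */
badLev = bigRes # bigDist;         /* bad leverage points */
goodLev = (^bigRes) # bigDist;     /* good leverage points */
vertOut = bigRes # (^bigDist);     /* vertical outliers */
print cutoff, badLev goodLev vertOut;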

Example 12.6: Hawkins-Bradu-Kass Data

The first 14 observations of the following data set (see Hawkins, Bradu, and Kass (1984)) are leverage
points; however, only observations 12, 13, and 14 have large h_ii values, and only observations 12 and 14
have large MD_i values.

title "Hawkins, Bradu, Kass (1984) Data";


aa = { 1 10.1 19.6 28.3 9.7,
2 9.5 20.5 28.9 10.1,
3 10.7 20.2 31.0 10.3,
4 9.9 21.5 31.7 9.5,
5 10.3 21.1 31.1 10.0,
6 10.8 20.4 29.2 10.0,
7 10.5 20.9 29.1 10.8,
8 9.9 19.6 28.8 10.3,
9 9.7 20.7 31.0 9.6,
10 9.3 19.7 30.3 9.9,
11 11.0 24.0 35.0 -0.2,
12 12.0 23.0 37.0 -0.4,
13 12.0 26.0 34.0 0.7,
14 11.0 34.0 34.0 0.1,
15 3.4 2.9 2.1 -0.4,
16 3.1 2.2 0.3 0.6,
17 0.0 1.6 0.2 -0.2,
18 2.3 1.6 2.0 0.0,
19 0.8 2.9 1.6 0.1,
20 3.1 3.4 2.2 0.4,
21 2.6 2.2 1.9 0.9,
22 0.4 3.2 1.9 0.3,
23 2.0 2.3 0.8 -0.8,
24 1.3 2.3 0.5 0.7,
25 1.0 0.0 0.4 -0.3,
26 0.9 3.3 2.5 -0.8,
27 3.3 2.5 2.9 -0.7,
28 1.8 0.8 2.0 0.3,
29 1.2 0.9 0.8 0.3,
30 1.2 0.7 3.4 -0.3,
31 3.1 1.4 1.0 0.0,
32 0.5 2.4 0.3 -0.4,
33 1.5 3.1 1.5 -0.6,
34 0.4 0.0 0.7 -0.7,
35 3.1 2.4 3.0 0.3,


36 1.1 2.2 2.7 -1.0,
37 0.1 3.0 2.6 -0.6,
38 1.5 1.2 0.2 0.9,
39 2.1 0.0 1.2 -0.7,
40 0.5 2.0 1.2 -0.5,
41 3.4 1.6 2.9 -0.1,
42 0.3 1.0 2.7 -0.7,
43 0.1 3.3 0.9 0.6,
44 1.8 0.5 3.2 -0.7,
45 1.9 0.1 0.6 -0.5,
46 1.8 0.5 3.0 -0.4,
47 3.0 0.1 0.8 -0.9,
48 3.1 1.6 3.0 0.1,
49 3.1 2.5 1.9 0.9,
50 2.1 2.8 2.9 -0.4,
51 2.3 1.5 0.4 0.7,
52 3.3 0.6 1.2 -0.5,
53 0.3 0.4 3.3 0.7,
54 1.1 3.0 0.3 0.7,
55 0.5 2.4 0.9 0.0,
56 1.8 3.2 0.9 0.1,
57 1.8 0.7 0.7 0.7,
58 2.4 3.4 1.5 -0.1,
59 1.6 2.1 3.0 -0.3,
60 0.3 1.5 3.3 -0.9,
61 0.4 3.4 3.0 -0.3,
62 0.9 0.1 0.3 0.6,
63 1.1 2.7 0.2 -0.3,
64 2.8 3.0 2.9 -0.5,
65 2.0 0.7 2.7 0.6,
66 0.2 1.8 0.8 -0.9,
67 1.6 2.0 1.2 -0.7,
68 0.1 0.0 1.1 0.6,
69 2.0 0.6 0.3 0.2,
70 1.0 2.2 2.9 0.7,
71 2.2 2.5 2.3 0.2,
72 0.6 2.0 1.5 -0.2,
73 0.3 1.7 2.2 0.4,
74 0.0 2.2 1.6 -0.9,
75 0.3 0.4 2.6 0.2 };

a = aa[,2:4]; b = aa[,5];

The data are also listed in Rousseeuw and Leroy (1987).


The complete enumeration must inspect 1,215,450 subsets.
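As a quick check (not part of the original text), the number of elemental subsets follows from the binomial
coefficient C(75, 4), which you can compute directly:

nsub = comb(75, 4);   /* = 1,215,450 subsets of 4 cases out of 75 */
print nsub;
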
Output 12.6.1 displays the iteration history for MVE.

optn = j(9,1,.);
optn[1]= 3; /* ipri */
optn[2]= 1; /* pcov: print COV */
optn[3]= 1; /* pcor: print CORR */
optn[5]= -1; /* nrep: all subsets */

call mve(sc,xmve,dist,optn,a);

Output 12.6.1 Iteration History for MVE

Hawkins, Bradu, Kass (1984) Data

Best
Subset Singular Criterion Percent

121545 0 51.104276 10
243090 1 51.104276 20
364635 1 51.104276 30
486180 2 51.104276 40
607725 3 51.104276 50
729270 9 6.271725 60
850815 35 6.271725 70
972360 55 5.912308 80
1093905 76 5.912308 90
1215450 114 5.912308 100

Output 12.6.2 reports the robust parameter estimates for MVE.

Output 12.6.2 Robust Location Estimates

Robust MVE Location


Estimates

VAR1 1.5133333333
VAR2 1.8083333333
VAR3 1.7016666667

Robust MVE Scatter Matrix

VAR1 VAR2 VAR3

VAR1 1.1143954802 0.0939548023 0.1416723164


VAR2 0.0939548023 1.1231497175 0.1174435028
VAR3 0.1416723164 0.1174435028 1.0747429379

Output 12.6.3 reports the eigenvalues of the robust scatter matrix and the robust correlation matrix.

Output 12.6.3 MVE Scatter Matrix

Eigenvalues of Robust
Scatter Matrix

VAR1 1.3396371545
VAR2 1.0281247572
VAR3 0.9445262239

Output 12.6.3 continued

Robust Correlation Matrix

VAR1 VAR2 VAR3

VAR1 1 0.0839808925 0.1294532696


VAR2 0.0839808925 1 0.1068951177
VAR3 0.1294532696 0.1068951177 1

Output 12.6.4 shows a portion of the classical Mahalanobis and robust distances obtained by complete
enumeration. The first 14 observations are recognized as outliers (leverage points).

Output 12.6.4 Mahalanobis and Robust Distances

Classical Distances and Robust (Rousseeuw) Distances


Unsquared Mahalanobis Distance and
Unsquared Rousseeuw Distance of Each Observation
Mahalanobis Robust
N Distances Distances Weight

1 1.916821 29.541649 0
2 1.855757 30.344481 0
3 2.313658 31.985694 0
4 2.229655 33.011768 0
5 2.100114 32.404938 0
6 2.146169 30.683153 0
7 2.010511 30.794838 0
8 1.919277 29.905756 0
9 2.221249 32.092048 0
10 2.333543 31.072200 0
11 2.446542 36.808021 0
12 3.108335 38.071382 0
13 2.662380 37.094539 0
14 6.381624 41.472255 0
15 1.815487 1.994672 1.000000
16 2.151357 2.202278 1.000000
17 1.384915 1.918208 1.000000
18 0.848155 0.819163 1.000000
19 1.148941 1.288387 1.000000
20 1.591431 2.046703 1.000000
21 1.089981 1.068327 1.000000
22 1.548776 1.768905 1.000000
23 1.085421 1.166951 1.000000
24 0.971195 1.304648 1.000000
25 0.799268 2.030417 1.000000
26 1.168373 1.727131 1.000000
27 1.449625 1.983831 1.000000
28 0.867789 1.073856 1.000000
29 0.576399 1.168060 1.000000
30 1.568868 2.091386 1.000000
... ... ... ...
75 1.899178 2.042560 1.000000

The graphs in Output 12.6.5 and Output 12.6.6 show the following:

• the plot of standardized LMS residuals vs. robust distances RD_i

• the plot of standardized LS residuals vs. Mahalanobis distances MD_i

The graph identifies the four good leverage points 11, 12, 13, and 14, which have small standardized LMS
residuals but large robust distances, and the 10 bad leverage points 1, ..., 10, which have large standardized
LMS residuals and large robust distances.

Output 12.6.5 Hawkins-Bradu-Kass Data: LMS Residuals vs. Robust Distances



Output 12.6.6 Hawkins-Bradu-Kass Data: LS Residuals vs. Mahalanobis Distances

Example 12.7: Stackloss Data

The graphs in Output 12.7.1 and Output 12.7.2 show the following:

• the plot of standardized LMS residuals vs. robust distances RD_i

• the plot of standardized LS residuals vs. Mahalanobis distances MD_i

In the first plot, you see that case 4 is a regression outlier but not a leverage point, so it is a vertical outlier.
Cases 1, 3, and 21 are bad leverage points, whereas case 2 is a good leverage point. Note that case 21 lies
near the boundary line between vertical outliers and bad leverage points and that case 2 is very close to the
boundary between good and bad leverage points.

Output 12.7.1 Stackloss Data: LMS Residuals vs. Robust Distances



Output 12.7.2 Stackloss Data: LS Residuals vs. Mahalanobis Distances

References

Afifi, A. A. and Azen, S. P. (1972), Statistical Analysis: A Computer-Oriented Approach, New York:
Academic Press.

Barnett, V. and Lewis, T. (1978), Outliers in Statistical Data, New York: John Wiley & Sons.

Barreto, H. and Maharry, D. (2006), "Least Median of Squares and Regression Through the Origin,"
Computational Statistics and Data Analysis, 50, 1391–1397.

Brownlee, K. A. (1965), Statistical Theory and Methodology in Science and Engineering, New York: John
Wiley & Sons.

Chen, C. (2002), "Robust Tools in SAS," in Developments in Robust Statistics: International Conference on
Robust Statistics, 125–133, Heidelberg: Springer-Verlag.

Ezekiel, M. and Fox, K. A. (1959), Methods of Correlation and Regression Analysis, New York: John Wiley
& Sons.

Hawkins, D. M., Bradu, D., and Kass, G. V. (1984), "Location of Several Outliers in Multiple Regression
Data Using Elemental Sets," Technometrics, 26, 197–208.

Humphreys, R. M. (1978), "Studies of Luminous Stars in Nearby Galaxies. I. Supergiants and O Stars in
the Milky Way," The Astrophysical Journal Supplement Series, 38, 309–350.

Jerison, H. J. (1973), Evolution of the Brain and Intelligence, New York: Academic Press.

Osborne, M. R. (1985), Finite Algorithms in Optimization and Data Analysis, New York: John Wiley &
Sons.

Prescott, P. (1975), "An Approximate Test for Outliers in Linear Models," Technometrics, 17, 129–132.

Rousseeuw, P. J. (1984), "Least Median of Squares Regression," Journal of the American Statistical
Association, 79, 871–880.

Rousseeuw, P. J. (1985), "Multivariate Estimation with High Breakdown Point," in W. Grossmann, G. Pflug,
I. Vincze, and W. Wertz, eds., Mathematical Statistics and Applications, 283–297, Dordrecht: Reidel
Publishing.

Rousseeuw, P. J. and Hubert, M. (1996), "Recent Developments in PROGRESS," Computational Statistics
and Data Analysis, 21, 67–85.

Rousseeuw, P. J. and Leroy, A. M. (1987), Robust Regression and Outlier Detection, New York: John Wiley
& Sons.

Rousseeuw, P. J. and Van Driessen, K. (1999), "A Fast Algorithm for the Minimum Covariance Determinant
Estimator," Technometrics, 41, 212–223.

Rousseeuw, P. J. and Van Driessen, K. (2000), "An Algorithm for Positive-Breakdown Regression Based on
Concentration Steps," in Data Analysis: Scientific Modeling and Practical Application, 335–346, New
York: Springer-Verlag.

Rousseeuw, P. J. and Van Zomeren, B. C. (1990), "Unmasking Multivariate Outliers and Leverage Points,"
Journal of the American Statistical Association, 85, 633–639.

Vansina, F. and De Greve, J. P. (1982), "Close Binary Systems Before and After Mass Transfer," Astrophysics
and Space Science, 87, 377–401.
Chapter 13

Time Series Analysis and Examples

Contents
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Basic Time Series Subroutines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Time Series Analysis and Control Subroutines . . . . . . . . . . . . . . . . . . . . . . . . . 253
Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Example 13.1: VAR Estimation and Variance Decomposition . . . . . . . . . . . . . 303
Kalman Filter Subroutines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Example 13.2: Kalman Filtering: Likelihood Function Evaluation . . . . . . . . . . . 310
Example 13.3: Kalman Filtering: SSM Estimation With the EM Algorithm . . . . . . 314
Example 13.4: Diffuse Kalman Filtering . . . . . . . . . . . . . . . . . . . . . . . . 319
Vector Time Series Analysis Subroutines . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Fractionally Integrated Time Series Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 327
Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331

Overview

This chapter describes SAS/IML subroutines related to univariate, multivariate, and fractional time series
analysis, and subroutines for Kalman filtering and smoothing. These subroutines can be used in analyzing
economic and financial time series. You can develop a model of univariate time series and a model of the
relationships between vector time series. The Kalman filter subroutines provide analysis of various time
series and are presented as a tool for dealing with state space models.
The subroutines offer the following functions:

• generating univariate, multivariate, and fractional time series

• computing the likelihood function of ARMA, VARMA, and ARFIMA models

• computing an autocovariance function of ARMA, VARMA, and ARFIMA models

• checking the stationarity of ARMA and VARMA models

• filtering and smoothing of time series models by using the Kalman method

• fitting AR, periodic AR, time-varying coefficient AR, VAR, and ARFIMA models

• handling the Bayesian seasonal adjustment model

In addition, decomposition analysis, forecasting of an ARMA model, and fractional differencing of the series
are provided.
This chapter consists of five sections. The first section includes the ARMACOV and ARMALIK subroutines
and the ARMASIM function. The second section includes the TSBAYSEA, TSDECOMP, TSMLOCAR,
TSMLOMAR, TSMULMAR, TSPERARS, TSPRED, TSROOT, TSTVCAR, and TSUNIMAR subroutines. The
third section includes the KALCVF, KALCVS, KALDFF, and KALDFS subroutines. The fourth section
includes the VARMACOV, VARMALIK, VARMASIM, VNORMAL, and VTSROOT subroutines. The last
section includes the FARMACOV, FARMAFIT, FARMALIK, FARMASIM, and FDIF subroutines.

Basic Time Series Subroutines

In classical linear regression analysis, the underlying process often can be represented simply by an intercept
and slope parameters. A time series can be modeled by a type of regression analysis.
The ARMASIM function generates various time series from the underlying AR, MA, and ARMA models.
Simulations of time series with known ARMA structure are often needed as part of other simulations or as
learning data sets for developing time series analysis skills.
The ARMACOV subroutine provides the pattern of the autocovariance function of AR, MA, and ARMA
models and helps to identify and fit a proper model.
The ARMALIK subroutine provides the log likelihood of an ARMA model and helps to obtain estimates of
the parameters of a regression model with innovations having an ARMA structure.
The following subroutines and functions are supported:

ARMACOV computes an autocovariance sequence for an ARMA model.


ARMALIK computes the log likelihood and residuals for an ARMA model.
ARMASIM simulates an ARMA series.

See the examples of the use of the ARMACOV and ARMALIK subroutines in Chapter 9.

Getting Started

Consider a time series of length 100 from the ARMA(2,1) model

y_t = 0.5 y_{t-1} - 0.04 y_{t-2} + e_t + 0.25 e_{t-1}

where the error series follows a normal distribution with mean 10 and standard deviation 2.
The following statements generate the ARMA(2,1) model, compute 10 lags of its autocovariance functions,
and calculate its log-likelihood function and residuals:

proc iml;
/* ARMA(2,1) model */
phi = {1 -0.5 0.04};
theta = {1 0.25};
mu = 10;
sigma = 2;
nobs = 100;
seed = 3456;
lag = 10;
yt = armasim(phi, theta, mu, sigma, nobs, seed);
call armacov(autocov, cross, convol, phi, theta, lag);
autocov = autocov`;
cross = cross`;
convol = convol`;
lag = (0:lag-1)`;
print autocov cross convol;
call armalik(lnl, resid, std, yt, phi, theta);

resid=resid[1:9];
std=std[1:9];
print lnl resid std;

Figure 13.1 Plot of Generated ARMA(2,1) Process (ARMASIM)

The ARMASIM function generates the data shown in Figure 13.1.

Figure 13.2 Autocovariance functions of ARMA(2,1) Model (ARMACOV)

autocov cross convol

1.6972803 1.1875 1.0625


1.0563848 0.25 0.25
0.4603012
0.1878952
0.0755356
0.030252
0.0121046
0.0048422
0.0019369
0.0007748

In Figure 13.2, the ARMACOV subroutine prints the autocovariance functions of the ARMA(2,1) model,
the covariance functions of the moving-average term with lagged values of the process, and the
autocovariance functions of the moving-average term.

Figure 13.3 Log-Likelihood Function of ARMA(2,1) Model (ARMALIK)

lnl resid std

-154.9148 5.2779797 1.3027971


22.034073 2.3491607 1.0197
0.5705918 2.3893996 1.0011951
8.4086892 1.0000746
2.200401 1.0000047
5.4127254 1.0000003
6.2756004 1
1.1944693 1
4.9425372 1

The first column in Figure 13.3 shows the log-likelihood function, the estimate of the innovation variance,
and the log of the determinant of the variance matrix. The next two columns show the first nine
standardized residuals and the scale factors used to standardize them.

Syntax

CALL ARMACOV (auto, cross, convol, phi, theta, num) ;

CALL ARMALIK (lnl, resid, std, x, phi, theta) ;

ARMASIM (phi, theta, mu, sigma, n, < seed >) ;

Time Series Analysis and Control Subroutines

This section describes an adaptation of parts of the Time Series Analysis and Control (TIMSAC) package
developed by the Institute of Statistical Mathematics (ISM) in Japan.
Selected routines from the TIMSAC package from ISM were converted by SAS Institute staff into SAS/IML
routines under an agreement between SAS Institute and ISM. Credit for authorship of these TIMSAC
SAS/IML routines goes to ISM, which has agreed to make them available to SAS users without charge.
There are four packages of TIMSAC programs. See the section "ISM TIMSAC Packages" on page 301 for
more information about the TIMSAC package produced by ISM. Since these SAS/IML time series analysis
subroutines are adapted from the corresponding FORTRAN subroutines in the TIMSAC package produced
by ISM, they are collectively referred to as the TIMSAC subroutines in this chapter.
The subroutines analyze and forecast univariate and multivariate time series data. The nonstationary time
series and seasonal adjustment models can also be analyzed by using the Interactive Matrix Language
TIMSAC subroutines. These subroutines contain the Bayesian modeling of seasonal adjustment and changing
spectrum estimation.
Discrete time series modeling has been widely used to analyze dynamic systems in economics, engineering,
and statistics. The Box-Jenkins and Box-Tiao approaches are classical examples of unified time series
analysis through identification, estimation, and forecasting (or control). The ARIMA procedure in the
SAS/ETS product uses these approaches. Bayesian methods are being increasingly applied despite the
controversial issues involved in choosing a prior distribution.
The fundamental idea of the Bayesian method is that uncertainties can be explained by probabilities. If
there is a class model (Ω) that consists of sets of member models (ω), you can describe the uncertainty of
Ω by using a prior distribution of ω. The member model ω is directly related to model parameters. Let the
prior probability density function be p(ω). When you observe the data y that is generated from the model
Ω, the data distribution is described as p(y|ω) given the unknown ω with a prior probability density p(ω),
where the function p(y|ω) is the usual likelihood function. Then the posterior distribution is the updated
prior distribution given the sample information. The posterior probability density function is proportional
to the observed likelihood function × the prior density function.
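
The following statements are a minimal sketch (not part of the original text; all values are hypothetical) that
updates a discrete prior over three member models ω into a posterior after data y are observed:

prior = { 0.5, 0.3, 0.2 };                 /* p(omega) for three member models */
like = { 0.10, 0.40, 0.25 };               /* p(y|omega) for the observed data */
post = (prior # like) / sum(prior # like); /* posterior: likelihood x prior, normalized */
print post;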
The TIMSAC subroutines contain various time series analysis and Bayesian models. Most of the subrou-
tines are based on the minimum Akaike information criterion (AIC) or on the minimum Akaike Bayesian
information criterion (ABIC) method to determine the best model among alternative models. The
TSBAYSEA subroutine is a typical example of Bayesian modeling. The following subroutines are supported:

TSBAYSEA Bayesian seasonal adjustment modeling


TSDECOMP time series decomposition analysis
TSMLOCAR locally stationary univariate AR model fitting
TSMLOMAR locally stationary multivariate AR model fitting
TSMULMAR multivariate AR model fitting
TSPERARS periodic AR model fitting
TSPRED ARMA model forecasting and forecast error variance
TSROOT polynomial roots or ARMA coefficients computation
TSTVCAR time-varying coefficient AR model estimation
TSUNIMAR univariate AR model fitting

For univariate and multivariate autoregressive model estimation, the least squares method is used. The least
squares estimate is an approximate maximum likelihood estimate if error disturbances are assumed to be
Gaussian. The least squares method is performed by using the Householder transformation method. See the
section "Least Squares and Householder Transformation" on page 295 for details.
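
The following statements are a minimal sketch (not part of the original text; the column vector is hypothetical)
of a single Householder reflection, the building block of this least squares method; it annihilates all but the
first element of a column:

x = { 3, 4, 0 };                          /* hypothetical column vector */
v = x;
v[1] = v[1] + sign(x[1]) * sqrt(ssq(x));  /* Householder vector */
H = i(3) - 2 * (v * v`) / ssq(v);         /* reflector H = I - 2vv`/(v`v) */
Hx = H * x;                               /* = { -5, 0, 0 } */
print Hx;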
The TSUNIMAR and TSMULMAR subroutines estimate the autoregressive models and select the appro-
priate AR order automatically by using the minimum AIC method. The TSMLOCAR and TSMLOMAR
subroutines analyze the nonstationary time series data. The Bayesian time-varying AR coefficient model
(TSTVCAR) offers another nonstationary time series analysis method. The state space and Kalman filter
method is systematically applied to the smoothness priors models (TSDECOMP and TSTVCAR), which
have stochastically perturbed difference equation constraints. The TSBAYSEA subroutine provides a way
of handling Bayesian seasonal adjustment, and it can be an alternative to the SAS/ETS X-11 procedure. The
TSBAYSEA subroutine employs the smoothness priors idea through constrained least squares estimation,
while the TSDECOMP and TSTVCAR subroutines estimate the smoothness tradeoff parameters by using
the state space model and Kalman filter recursive computation. The TSPRED subroutine computes the one-
step or multistep predicted values of the ARMA time series model. In addition, the TSPRED subroutine
computes forecast error variances and impulse response functions. The TSROOT subroutine computes the
AR and MA coefficients given the characteristic roots of the polynomial equation and the characteristic
roots for the AR or MA model.

Getting Started

Minimum AIC Model Selection

The time series model is automatically selected by using the AIC. The TSUNIMAR call estimates the
univariate autoregressive model and computes the AIC. You need to specify the maximum lag or order of the
AR process with the MAXLAG= option or put the maximum lag as the sixth argument of the TSUNIMAR
call. Here is an example:

proc iml;
y = { 2.430 2.506 2.767 2.940 3.169 3.450 3.594 3.774 3.695 3.411
2.718 1.991 2.265 2.446 2.612 3.359 3.429 3.533 3.261 2.612
2.179 1.653 1.832 2.328 2.737 3.014 3.328 3.404 2.981 2.557
2.576 2.352 2.556 2.864 3.214 3.435 3.458 3.326 2.835 2.476
2.373 2.389 2.742 3.210 3.520 3.828 3.628 2.837 2.406 2.675
2.554 2.894 3.202 3.224 3.352 3.154 2.878 2.476 2.303 2.360
2.671 2.867 3.310 3.449 3.646 3.400 2.590 1.863 1.581 1.690
1.771 2.274 2.576 3.111 3.605 3.543 2.769 2.021 2.185 2.588
2.880 3.115 3.540 3.845 3.800 3.579 3.264 2.538 2.582 2.907
3.142 3.433 3.580 3.490 3.475 3.579 2.829 1.909 1.903 2.033
2.360 2.601 3.054 3.386 3.553 3.468 3.187 2.723 2.686 2.821
3.000 3.201 3.424 3.531 };
call tsunimar(arcoef,ev,nar,aic) data=y opt={-1 1}
print=1 maxlag=20;

You can also invoke the TSUNIMAR subroutine as follows:

call tsunimar(arcoef,ev,nar,aic,y,20,{-1 1},,1);

The optional arguments can be omitted. In this example, the argument MISSING is omitted, and thus the
default option (MISSING=0) is used. The summary table of the minimum AIC method is displayed in
Figure 13.4 and Figure 13.5. The final estimates are given in Figure 13.6.

Figure 13.4 Minimum AIC Table - I

ORDER INNOVATION VARIANCE


M V(M) AIC(M)
0 0.31607294 -108.26753229
1 0.11481982 -201.45277331
2 0.04847420 -280.51201122
3 0.04828185 -278.88576251
4 0.04656506 -280.28905616
5 0.04615922 -279.11190502
6 0.04511943 -279.25356641
7 0.04312403 -281.50543541
8 0.04201118 -281.96304075
9 0.04128036 -281.61262868
10 0.03829179 -286.67686828
11 0.03318558 -298.13013264
12 0.03255171 -297.94298716
13 0.03247784 -296.15655602
14 0.03237083 -294.46677874
15 0.03234790 -292.53337704
16 0.03187416 -291.92021487
17 0.03183282 -290.04220196
18 0.03126946 -289.72064823
19 0.03087893 -288.90203735
20 0.02998019 -289.67854830

Figure 13.5 Minimum AIC Table - II

AIC(M)-AICMIN (truncated at 40.0)


0 10 20 30 40
M AIC(M)-AICMIN +---------+---------+---------+---------+
0 189.862600 | .
1 96.677359 | .
2 17.618121 | * |
3 19.244370 | * |
4 17.841076 | * |
5 19.018228 | * |
6 18.876566 | * |
7 16.624697 | * |
8 16.167092 | * |
9 16.517504 | * |
10 11.453264 | * |
11 0 * |
12 0.187145 * |
13 1.973577 | * |
14 3.663354 | * |
15 5.596756 | * |
16 6.209918 | * |
17 8.087931 | * |
18 8.409484 | * |
19 9.228095 | * |
20 8.451584 | * |
+---------+---------+---------+---------+

***** MINIMUM AIC = -298.130133 ATTAINED AT M = 11 *****



The minimum AIC order is selected as 11. Then the coefficients are estimated as in Figure 13.6. Note that
the first 20 observations are used as presample values.

Figure 13.6 Minimum AIC Estimation

..........................M A I C E.........................
. .
. .
. .
. M AR Coefficients: AR(M) .
. .
. 1 1.181322 .
. 2 -0.551571 .
. 3 0.231372 .
. 4 -0.178040 .
. 5 0.019874 .
. 6 -0.062573 .
. 7 0.028569 .
. 8 -0.050710 .
. 9 0.199896 .
. 10 0.161819 .
. 11 -0.339086 .
. .
. .
. AIC = -298.1301326 .
. Innovation Variance = 0.033186 .
. .
. .
. INPUT DATA START = 21 FINISH = 114 .
................................................................

You can estimate the AR(11) model directly by specifying OPT={-1 0} and using the first 11 observations
as presample values. The AR(11) estimates shown in Figure 13.7 are different from the minimum AIC
estimates in Figure 13.6 because the samples are slightly different. Here is the code:

call tsunimar(arcoef,ev,nar,aic,y,11,{-1 0},,1);



Figure 13.7 AR(11) Estimation

..........................M A I C E.........................
. .
. .
. .
. M AR Coefficients: AR(M) .
. .
. 1 1.149416 .
. 2 -0.533719 .
. 3 0.276312 .
. 4 -0.326420 .
. 5 0.169336 .
. 6 -0.164108 .
. 7 0.073123 .
. 8 -0.030428 .
. 9 0.151227 .
. 10 0.192808 .
. 11 -0.340200 .
. .
. .
. AIC = -318.7984105 .
. Innovation Variance = 0.036563 .
. .
. .
. INPUT DATA START = 12 FINISH = 114 .
................................................................

The minimum AIC procedure can also be applied to the vector autoregressive (VAR) model by using the
TSMULMAR subroutine. See the section "Multivariate Time Series Analysis" on page 291 for details.
Three variables are used as input. The maximum lag is specified as 10. Here is the code:

data one;
input invest income consum @@;
datalines;
. . . data lines omitted . . .
;
proc iml;
use one;
read all into y var{invest income consum};
mdel = 1;
maice = 2;
misw = 0; /* instantaneous modeling ? */
opt = mdel || maice || misw;
maxlag = 10;
miss = 0;
print = 1;
call tsmulmar(arcoef,ev,nar,aic,y,maxlag,opt,miss,print);

The VAR(3) model minimizes the AIC and was selected as an appropriate model (see Figure 13.8). How-
ever, AICs of the VAR(4) and VAR(5) models show little difference from VAR(3). You can also choose

VAR(4) or VAR(5) as an appropriate model in the context of minimum AIC since this AIC difference is
much less than 1.
Figure 13.8 VAR Model Selection

ORDER INNOVATION VARIANCE


M LOG(|V(M)|) AIC(M)
0 25.98001095 2136.36089828
1 15.70406486 1311.73331883
2 15.48896746 1312.09533158
3 15.18567834 1305.22562428
4 14.96865183 1305.42944974
5 14.74838535 1305.36759889
6 14.60269347 1311.42086432
7 14.54981887 1325.08514729
8 14.38596333 1329.64899297
9 14.16383772 1329.43469312
10 13.85377849 1322.00983656

AIC(M)-AICMIN (truncated at 40.0)


0 10 20 30 40
M AIC(M)-AICMIN +---------+---------+---------+---------+
0 831.135274 | .
1 6.507695 | * |
2 6.869707 | * |
3 0 * |
4 0.203825 * |
5 0.141975 * |
6 6.195240 | * |
7 19.859523 | * |
8 24.423369 | * |
9 24.209069 | * |
10 16.784212 | * |
+---------+---------+---------+---------+

The TSMULMAR subroutine estimates the instantaneous response model with diagonal error variance.
See the section "Multivariate Time Series Analysis" on page 291 for details on the instantaneous response
model. Therefore, it is possible to select the minimum AIC model independently for each equation. The
best model is selected by specifying MAXLAG=5, as in the following code:

call tsmulmar(arcoef,ev,nar,aic) data=y maxlag=5


opt={1 1 0} print=1;

Figure 13.9 Model Selection via Instantaneous Response Model

variance

256.64375 29.803549 76.846777


29.803549 228.97341 119.60387
76.846777 119.60387 134.21764

Figure 13.9 continued

arcoef

13.312109 1.5459098 15.963897


0.8257397 0.2514803 0
0.0958916 1.0057088 0
0.0320985 0.3544346 0.4698934
0.044719 -0.201035 0
0.0051931 -0.023346 0
0.1169858 -0.060196 0.0483318
0.1867829 0 0
0.0216907 0 0
-0.117786 0 0.3500366
0.1541108 0 0
0.0178966 0 0
0.0461454 0 -0.191437
-0.389644 0 0
-0.045249 0 0
-0.116671 0 0

aic

1347.6198

You can print the intermediate results of the minimum AIC procedure by using the PRINT=2 option.
Note that the AIC value depends on the MAXLAG=lag option and the number of parameters estimated.
The minimum AIC VAR estimation procedure (MAICE=2) uses the following AIC formula:

(T - lag) log(|Σ̂|) + 2(p n² + n × intercept)

In this formula, Σ̂ is the estimate of the innovation variance matrix, p is the order of the n-variate VAR
process, and intercept=1 if the intercept is specified;
otherwise, intercept=0. When you use the MAICE=1 or MAICE=0 option, AIC is computed as the sum of
AIC for each response equation. Therefore, there is an AIC difference of n(n-1) since the instantaneous
response model contains the additional n(n-1)/2 response variables as regressors.
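
The following statements are a minimal sketch (not part of the original text; all values are hypothetical) that
evaluates this AIC formula directly for a trivariate VAR:

nobs = 136;     /* T: number of observations (hypothetical) */
lag = 10;       /* MAXLAG= value (hypothetical) */
p = 3;          /* selected VAR order */
n = 3;          /* number of variables */
/* hypothetical estimate of the innovation variance matrix */
sigma = { 256.6 29.8 76.8, 29.8 229.0 119.6, 76.8 119.6 134.2 };
aic = (nobs - lag) * log(det(sigma)) + 2 * (p*n*n + n);  /* intercept=1 */
print aic;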
The following code estimates the instantaneous response model. The results are shown in Figure 13.10.

call tsmulmar(arcoef,ev,nar,aic) data=y


maxlag=3 opt={1 0 0};
print aic nar;
print arcoef;

Figure 13.10 AIC from Instantaneous Response Model

aic nar

1403.0762 3

Figure 13.10 continued

arcoef

4.8245814 5.3559216 17.066894


0.8855926 0.3401741 -0.014398
0.1684523 1.0502619 0.107064
0.0891034 0.4591573 0.4473672
-0.059195 -0.298777 0.1629818
0.1128625 -0.044039 -0.088186
0.1684932 -0.025847 -0.025671
0.0637227 -0.196504 0.0695746
-0.226559 0.0532467 -0.099808
-0.303697 -0.139022 0.2576405

The following code estimates the VAR model. The results are shown in Figure 13.11.

call tsmulmar(arcoef,ev,nar,aic) data=y maxlag=3


opt={1 2 0};
print aic nar;
print arcoef;

Figure 13.11 AIC from VAR Model

aic nar

1397.0762 3

arcoef

4.8245814 5.3559216 17.066894


0.8855926 0.3401741 -0.014398
0.1684523 1.0502619 0.107064
0.0891034 0.4591573 0.4473672
-0.059195 -0.298777 0.1629818
0.1128625 -0.044039 -0.088186
0.1684932 -0.025847 -0.025671
0.0637227 -0.196504 0.0695746
-0.226559 0.0532467 -0.099808
-0.303697 -0.139022 0.2576405

The AIC computed from the instantaneous response model is greater than that obtained from the VAR model
estimation by 6. There is a discrepancy between Figure 13.11 and Figure 13.8 because different observations
are used for estimation.

Nonstationary Data Analysis

The following example shows how to manage nonstationary data by using TIMSAC calls. In practice, time
series are considered to be stationary when the expected values of first and second moments of the series
do not change over time. This weak or covariance stationarity can be modeled by using the TSMLOCAR,
TSMLOMAR, TSDECOMP, and TSTVCAR subroutines.
First, the locally stationary model is estimated. The whole series (1000 observations) is divided into three
blocks of size 300 and one block of size 90, and the minimum AIC procedure is applied to each block of the
data set. See the section "Nonstationary Time Series" on page 287 for more details. Here is the code:

data one;
input y @@;
datalines;
. . . data lines omitted . . .
;

proc iml;
use one;
read all var{y};

mdel = -1;
lspan = 300; /* local span of data */
maice = 1;
opt = mdel || lspan || maice;
call tsmlocar(arcoef,ev,nar,aic,first,last)
data=y maxlag=10 opt=opt print=2;

Estimation results are displayed with the graphs of the power spectrum (log10(f_YY(g))), where f_YY(g) is a
rational spectral density function. See the section "Spectral Analysis" on page 292. The estimates for the
first block and third block are shown in Figure 13.12 and Figure 13.15, respectively. As the first block and the
second block do not have any sizable difference, the pooled model (AIC=45.892) is selected instead of the
moving model (AIC=46.957) in Figure 13.13. However, you can notice a slight change in the shape of the
spectrum of the third block of the data (observations 611 through 910). See Figure 13.14 and Figure 13.16
for comparison. The moving model is selected since the AIC (106.830) of the moving model is smaller than
that of the pooled model (108.867).

Figure 13.12 Locally Stationary Model for First Block

INITIAL LOCAL MODEL: N_CURR = 300


NAR_CURR = 8 AIC = 37.583203

..........................CURRENT MODEL.........................
. .
. .
. .
. M AR Coefficients: AR(M) .
. .
. 1 1.605717 .
. 2 -1.245350 .
. 3 1.014847 .
. 4 -0.931554 .
. 5 0.394230 .
. 6 -0.004344 .
. 7 0.111608 .
. 8 -0.124992 .
. .
. .
. AIC = 37.5832030 .
. Innovation Variance = 1.067455 .
. .
. .
. INPUT DATA START = 11 FINISH = 310 .
................................................................

Figure 13.13 Locally Stationary Model Comparison

--- THE FOLLOWING TWO MODELS ARE COMPARED ---

MOVING MODEL: (N_PREV = 300, N_CURR = 300)


NAR_CURR = 7 AIC = 46.957398
CONSTANT MODEL: N_POOLED = 600
NAR_POOLED = 8 AIC = 45.892350

***** CONSTANT MODEL ADOPTED *****

..........................CURRENT MODEL.........................
. .
. .
. .
. M AR Coefficients: AR(M) .
. .
. 1 1.593890 .
. 2 -1.262379 .
. 3 1.013733 .
. 4 -0.926052 .
. 5 0.314480 .
. 6 0.193973 .
. 7 -0.058043 .
. 8 -0.078508 .
. .
. .
. AIC = 45.8923501 .
. Innovation Variance = 1.047585 .
. .
. .
. INPUT DATA START = 11 FINISH = 610 .
................................................................

Figure 13.14 Power Spectrum for First and Second Blocks

POWER SPECTRAL DENSITY


20.00+
|
|
|
|
| XXXX
XXX XX XXX
| XXXX
| X
|
10.00+
| X
|
| X
|
| X XX
| X
| X X
|
| X X X
0+ X
| X X X
| XX XX
| XXXX X
|
| X
| X
|
| X
| X
-10.0+ X
| XX
| XX
| XX
| XXX
| XXXXXX
|
|
|
|
-20.0+-----------+-----------+-----------+-----------+-----------+
0.0 0.1 0.2 0.3 0.4 0.5

FREQUENCY

Figure 13.15 Locally Stationary Model for Third Block

--- THE FOLLOWING TWO MODELS ARE COMPARED ---

MOVING MODEL: (N_PREV = 600, N_CURR = 300)


NAR_CURR = 7 AIC = 106.829869
CONSTANT MODEL: N_POOLED = 900
NAR_POOLED = 8 AIC = 108.867091

*************************************
***** *****
***** NEW MODEL ADOPTED *****
***** *****
*************************************

..........................CURRENT MODEL.........................
. .
. .
. .
. M AR Coefficients: AR(M) .
. .
. 1 1.648544 .
. 2 -1.201812 .
. 3 0.674933 .
. 4 -0.567576 .
. 5 -0.018924 .
. 6 0.516627 .
. 7 -0.283410 .
. .
. .
. AIC = 60.9375188 .
. Innovation Variance = 1.161592 .
. .
. .
. INPUT DATA START = 611 FINISH = 910 .
................................................................

Figure 13.16 Power Spectrum for Third Block

POWER SPECTRAL DENSITY


20.00+ X
| X
| X
| X
| XXX
| XXXXX
| XX
XX X
|
|
10.00+ X
|
|
| X
|
| X
| X
| X X
| X
| X X
0+ X X X
| X
| X XX X
| XXXXXX
| X
|
| X
|
| X
| X
-10.0+ X
| XX
| XX XXXXX
| XXXXXXX
|
|
|
|
|
|
-20.0+-----------+-----------+-----------+-----------+-----------+
0.0 0.1 0.2 0.3 0.4 0.5

FREQUENCY

Finally, the moving model is selected since there is a structural change in the last block of data (observations
911 through 1000). The final estimates are stored in variables ARCOEF, EV, NAR, AIC, FIRST, and LAST.
The final estimates and spectrum are given in Figure 13.17 and Figure 13.18, respectively. The power
spectrum of the final model (Figure 13.18) is significantly different from that of the first and second blocks
(see Figure 13.14).

Figure 13.17 Locally Stationary Model for Last Block

--- THE FOLLOWING TWO MODELS ARE COMPARED ---

MOVING MODEL: (N_PREV = 300, N_CURR = 90)


NAR_CURR = 6 AIC = 139.579012
CONSTANT MODEL: N_POOLED = 390
NAR_POOLED = 9 AIC = 167.783711

*************************************
***** *****
***** NEW MODEL ADOPTED *****
***** *****
*************************************

..........................CURRENT MODEL.........................
. .
. .
. .
. M AR Coefficients: AR(M) .
. .
. 1 1.181022 .
. 2 -0.321178 .
. 3 -0.113001 .
. 4 -0.137846 .
. 5 -0.141799 .
. 6 0.260728 .
. .
. .
. AIC = 78.6414932 .
. Innovation Variance = 2.050818 .
. .
. .
. INPUT DATA START = 911 FINISH = 1000 .
................................................................

Figure 13.18 Power Spectrum for Last Block

POWER SPECTRAL DENSITY


30.00+
|
|
|
|
| X
|
| X
|
|
20.00+ X
|
|
| X X
|
| X
XXX X
| XXXXX X
|
|
10.00+ X
|
| X
|
| X
|
| X
| X
| X
| XX
0+ XX XXXXXX
| XXXXXX XX
| XX
| XX XX
| XX XXXX
| XXXXXXXXX
|
|
|
|
-10.0+-----------+-----------+-----------+-----------+-----------+
0.0 0.1 0.2 0.3 0.4 0.5

FREQUENCY

The multivariate analysis for locally stationary data is a straightforward extension of the univariate analysis.
The bivariate locally stationary VAR models are estimated. The selected model is the VAR(7) process
with some zero coefficients over the last block of data. There seems to be a structural difference between
observations from 11 to 610 and those from 611 to 896. Here is the code:
proc iml;
rudder = {. . . data lines omitted . . .};
yawing = {. . . data lines omitted . . .};

y = rudder` || yawing`;
n = nrow(y);                       /* number of observations */
c = {0.01795 0.02419};
/*-- calibration of data --*/
y = y # (c @ j(n,1,1));
mdel = -1;
lspan = 300; /* local span of data */
maice = 1;
call tsmlomar(arcoef,ev,nar,aic,first,last) data=y maxlag=10
   opt=(mdel || lspan || maice) print=1;
The results of the analysis are shown in Figure 13.19.
Figure 13.19 Locally Stationary VAR Model Analysis

--- THE FOLLOWING TWO MODELS ARE COMPARED ---

MOVING MODEL: (N_PREV = 600, N_CURR = 286)
NAR_CURR = 7       AIC = -823.845234
CONSTANT MODEL: N_POOLED = 886
NAR_POOLED = 10    AIC = -716.818588

*************************************
*****                           *****
*****     NEW MODEL ADOPTED     *****
*****                           *****
*************************************

..........................CURRENT MODEL.........................
. .
. .
. .
. M AR Coefficients .
. .
. 1 0.932904 -0.130964 .
. -0.024401 0.599483 .
. 2 0.163141 0.266876 .
. -0.135605 0.377923 .
. 3 -0.322283 0.178194 .
. 0.188603 -0.081245 .
. 4 0.166094 -0.304755 .
. -0.084626 -0.180638 .
. 5 0 0 .
. 0 -0.036958 .
. 6 0 0 .
. 0 0.034578 .
. 7 0 0 .
. 0 0.268414 .
. .
. .
. AIC = -114.6911872 .
. .
. Innovation Variance .
. .
. 1.069929 0.145558 .
. 0.145558 0.563985 .
. .
. .
. INPUT DATA START = 611 FINISH = 896 .
................................................................

Consider the time series decomposition

   y_t = T_t + S_t + u_t + \epsilon_t

where T_t and S_t are the trend and seasonal components, respectively, and u_t is a stationary AR(p) process. The annual real GNP series is analyzed under a second-difference stochastic constraint on the trend component
and the stationary AR(2) process:

   T_t = 2T_{t-1} - T_{t-2} + w_{1t}
   u_t = \alpha_1 u_{t-1} + \alpha_2 u_{t-2} + w_{2t}

The seasonal component is ignored if you specify SORDER=0. Therefore, the following state space model is estimated:

   y_t = H z_t + \epsilon_t
   z_t = F z_{t-1} + w_t

where

   H = \begin{bmatrix} 1 & 0 & 1 & 0 \end{bmatrix}

   F = \begin{bmatrix} 2 & -1 & 0 & 0 \\
                       1 &  0 & 0 & 0 \\
                       0 &  0 & \alpha_1 & \alpha_2 \\
                       0 &  0 & 1 & 0 \end{bmatrix}

   z_t = (T_t, T_{t-1}, u_t, u_{t-1})'

   w_t = (w_{1t}, 0, w_{2t}, 0)' ~ N\left( 0,
         \begin{bmatrix} \tau_1^2 & 0 & 0 & 0 \\
                         0 & 0 & 0 & 0 \\
                         0 & 0 & \tau_2^2 & 0 \\
                         0 & 0 & 0 & 0 \end{bmatrix} \right)

The parameters of this state space model are \tau_1^2, \tau_2^2, \alpha_1, and \alpha_2. Here is the code:

proc iml;
y = { 116.8 120.1 123.2 130.2 131.4 125.6 124.5 134.3
135.2 151.8 146.4 139.0 127.8 147.0 165.9 165.5
179.4 190.0 189.8 190.9 203.6 183.5 169.3 144.2
141.5 154.3 169.5 193.0 203.2 192.9 209.4 227.2
263.7 297.8 337.1 361.3 355.2 312.6 309.9 323.7
324.1 355.3 383.4 395.1 412.8 406.0 438.0 446.1
452.5 447.3 475.9 487.7 497.2 529.8 551.0 581.1
617.8 658.1 675.2 706.6 724.7 };
y = y`; /*-- convert to column vector --*/
mdel = 0;
trade = 0;
tvreg = 0;
year = 0;
period= 0;
log = 0;
maxit = 100;
update = .; /* use default update method */
line = .; /* use default line search method */
sigmax = 0; /* no upper bound for variances */
back = 100;
opt = mdel || trade || year || period || log || maxit ||
update || line || sigmax || back;
call tsdecomp(cmp,coef,aic) data=y order=2 sorder=0 nar=2
npred=5 opt=opt icmp={1 3} print=1;
y = y[52:61];
cmp = cmp[52:66,];
print y cmp;

The estimated parameters are printed when you specify the PRINT= option. In Figure 13.20, the estimated variances are printed under the title TAU2(I), showing that \hat\tau_1^2 = 2.915 and \hat\tau_2^2 = 113.9577. The AR coefficient estimates are \hat\alpha_1 = 1.397 and \hat\alpha_2 = -0.595. These estimates are also stored in the output matrix COEF.
Figure 13.20 Nonstationary Time Series and State Space Modeling

<<< Final Estimates >>>

--- PARAMETER VECTOR ---

1.607423E-01   6.283820E+00   8.761627E-01   -5.94879E-01

--- GRADIENT ---

3.352158E-04   5.237221E-06   2.907539E-04   -1.24376E-04

LIKELIHOOD = -249.937193     SIG2 = 18.135085
AIC = 509.874385

I       TAU2(I)       AR(I)       PARCOR(I)

1       2.915075      1.397374     0.876163
2     113.957607     -0.594879    -0.594879

The trend and stationary AR components are estimated by using the smoothing method, and out-of-sample
forecasts are computed by using a Kalman filter prediction algorithm. The trend and AR components are
stored in the matrix CMP since the ICMP={1 3} option is specified. The last 10 observations of the original
series Y and the last 15 observations of two components are shown in Figure 13.21. Note that the first
column of CMP is the trend component and the second column is the AR component. The last 5 observations
of the CMP matrix are out-of-sample forecasts.
Figure 13.21 Smoothed and Predicted Values of Two Components

      y          cmp

  487.7     514.01141   -26.94342
  497.2     532.62744   -32.48672
  529.8     552.02402   -24.46593
  551       571.90121   -20.15112
  581.1     592.31944   -10.58646
  617.8     613.21855   5.2504401
  658.1     634.43665   20.799207
  675.2     655.70431   22.161604
  706.6     677.2125    27.927978
  724.7     698.72364   25.957962
            720.23478   19.6592
            741.74593   12.029396
            763.25707   5.1147111
            784.76821   -0.008876
            806.27935   -3.05504

Seasonal Adjustment

Consider the simple time series decomposition

   y_t = T_t + S_t + \epsilon_t

The TSBAYSEA subroutine computes the seasonally adjusted series by estimating the seasonal component. The seasonally adjusted series is computed as y_t^* = y_t - \hat{S}_t. The details of the adjustment procedure are given in the section "Bayesian Seasonal Adjustment" on page 285.

The monthly labor force series (1972-1978) are analyzed. You do not need to specify the options vector if you want to use the default options. However, you should change OPT[2] when the data frequency is not monthly (OPT[2]=12). The NPRED= option produces the multistep forecasts for the trend and seasonal components. The stochastic constraints are specified as ORDER=2 and SORDER=1:

   T_t = 2T_{t-1} - T_{t-2} + w_{1t}
   S_t = -S_{t-1} - ... - S_{t-11} + w_{2t}

In Figure 13.22, the first column shows the trend components; the second column shows the seasonal components; the third column shows the forecasts; the fourth column shows the seasonally adjusted series; the last column shows the value of ABIC. The last 12 rows are forecasts. The figure is generated by using the following statements:

proc iml;
y = { 5447 5412 5215 4697 4344 5426
5173 4857 4658 4470 4268 4116
4675 4845 4512 4174 3799 4847
4550 4208 4165 3763 4056 4058
5008 5140 4755 4301 4144 5380
5260 4885 5202 5044 5685 6106
8180 8309 8359 7820 7623 8569
8209 7696 7522 7244 7231 7195
8174 8033 7525 6890 6304 7655
7577 7322 7026 6833 7095 7022
7848 8109 7556 6568 6151 7453
6941 6757 6437 6221 6346 5880 };
y = y`;
call tsbaysea(trend,season,series,adj,abic)
data=y order=2 sorder=1 npred=12 print=2;
print trend season series adj abic;

Figure 13.22 Trend and Seasonal Component Estimates and Forecasts

obs      trend        season       series        adj          abic

  1    4843.2502    576.86675    5420.1169    4870.1332    874.04585
  2    4848.6664    612.79607    5461.4624    4799.2039
  3    4871.2876    324.02004    5195.3077    4890.98
  4    4896.6633    -198.7601    4697.9032    4895.7601
  5    4922.9458    -572.5562    4350.3896    4916.5562
  .        .            .            .            .
 71    6551.6017    -266.2162    6285.3855    6612.2162
 72    6388.9012    -440.3472    5948.5539    6320.3472
 73    6226.2006     650.7707    6876.9713
 74    6063.5001    800.93733    6864.4374
 75    5900.7995    396.19866    6296.9982
 76    5738.099     -340.2852    5397.8137
 77    5575.3984    -719.1146    4856.2838
 78    5412.6979    553.19764    5965.8955
 79    5249.9973    202.06582    5452.0631
 80    5087.2968    -54.44768    5032.8491
 81    4924.5962    -295.2747    4629.3215
 82    4761.8957    -487.6621    4274.2336
 83    4599.1951    -266.1917    4333.0034
 84    4436.4946    -440.3354    3996.1591

The estimated spectral density function of the irregular series \hat\epsilon_t is shown in Figure 13.23 and Figure 13.24.
Figure 13.23 Spectrum of Irregular Component

[Line-printer plot: the rational spectrum of the irregular component, at frequency ordinates I = 0 through 60, plotted on a horizontal scale of 0.0 to 60.0. Figure 13.24 continues the plot. The plot carries these marker notes:]

X: If peaks (troughs) appear at these frequencies, try lower (higher) values of rigid and watch ABIC.
T: If a peak appears here, try trading-day adjustment.

Miscellaneous Time Series Analysis Tools

The forecast values of multivariate time series are computed by using the TSPRED call. In the following example, the multistep-ahead forecasts are produced from the VARMA(2,1) estimates. Since the VARMA model is estimated by using the mean deleted series, you should specify the CONSTANT=-1 option. You need to provide the original series instead of the mean deleted series to get the correct predictions. The forecast variance MSE and the impulse response function IMPULSE are also produced.

The VARMA(p,q) model is written

   y_t + \sum_{i=1}^{p} A_i y_{t-i} = \epsilon_t + \sum_{i=1}^{q} M_i \epsilon_{t-i}

Then the COEF matrix is constructed by stacking the matrices A_1, ..., A_p, M_1, ..., M_q. Here is the code:

proc iml;
c = { 264 235 239 239 275 277 274 334 334 306
308 309 295 271 277 221 223 227 215 223
241 250 270 303 311 307 322 335 335 334
309 262 228 191 188 215 215 249 291 296 };
f = { 690 690 688 690 694 702 702 702 700 702
702 694 708 702 702 708 700 700 702 694
698 694 700 702 700 702 708 708 710 704
704 700 700 694 702 694 710 710 710 708 };
t = { 1152 1288 1288 1288 1368 1456 1656 1496 1744 1464
1560 1376 1336 1336 1296 1296 1280 1264 1280 1272
1344 1328 1352 1480 1472 1600 1512 1456 1368 1280
1224 1112 1112 1048 1176 1064 1168 1280 1336 1248 };
p = { 254.14 253.12 251.85 250.41 249.09 249.19 249.52 250.19
248.74 248.41 249.95 250.64 250.87 250.94 250.96 251.33
251.18 251.05 251.00 250.99 250.79 250.44 250.12 250.19
249.77 250.27 250.74 250.90 252.21 253.68 254.47 254.80
254.92 254.96 254.96 254.96 254.96 254.54 253.21 252.08 };
y = c` || f` || t` || p`;
ar = { .82028 -.97167 .079386 -5.4382,
-.39983 .94448 .027938 -1.7477,
-.42278 -2.3314 1.4682 -70.996,
.031038 -.019231 -.0004904 1.3677,
-.029811 .89262 -.047579 4.7873,
.31476 .0061959 -.012221 1.4921,
.3813 2.7182 -.52993 67.711,
-.020818 .01764 .00037981 -.38154 };
ma = { .083035 -1.0509 .055898 -3.9778,
-.40452 .36876 .026369 -.81146,
.062379 -2.6506 .80784 -76.952,
.03273 -.031555 -.00019776 -.025205 };
coef = ar // ma;
ev = { 188.55 6.8082 42.385 .042942,
6.8082 32.169 37.995 -.062341,
42.385 37.995 5138.8 -.10757,
.042942 -.062341 -.10757 .34313 };

nar = 2; nma = 1;
call tspred(forecast,impulse,mse,y,coef,nar,nma,ev,
5,nrow(y),-1);
Figure 13.25 Multivariate ARMA Prediction

       observed              predicted
    Y1        Y2         P1         P2

   264       690      269.950    700.750
235 690 256.764 691.925
239 688 239.996 693.467
239 690 242.320 690.951
275 694 247.169 693.214
277 702 279.024 696.157
274 702 284.041 700.449
334 702 286.890 701.580
334 700 321.798 699.851
306 702 330.355 702.383
308 702 297.239 700.421
309 694 302.651 701.928
295 708 294.570 696.261
271 702 283.254 703.936
277 702 269.600 703.110
221 708 270.349 701.557
223 700 231.523 705.438
227 700 233.856 701.785
215 702 234.883 700.185
223 694 229.156 701.837
241 698 235.054 697.060
250 694 249.288 698.181
270 700 257.644 696.665
303 702 272.549 699.281
311 700 301.947 701.667
307 702 306.422 700.708
322 708 304.120 701.204
335 708 311.590 704.654
335 710 320.570 706.389
334 704 315.127 706.439
309 704 311.083 703.735
262 700 288.159 702.801
228 700 251.352 700.805
191 694 226.749 700.247
188 702 199.775 696.570
215 694 202.305 700.242
215 710 222.951 696.451
249 710 226.553 704.483
291 710 259.927 707.610
296 708 291.446 707.861
293.899 707.430
293.477 706.933
292.564 706.190
290.313 705.384
286.559 704.618

The first 40 rows in Figure 13.25 are one-step predictions. The last row contains the five-step forecast values of variables C and F. You can construct a confidence interval for these forecasts by using the mean square error matrix, MSE. See the section "Multivariate Time Series Analysis" on page 291 for more details about impulse response functions and the mean square error matrix.
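
For example, the following statements sketch an approximate 95% confidence interval for the five-step forecast of the first variable. This is a minimal sketch that continues the preceding TSPRED call and assumes that MSE stacks one 4x4 error covariance matrix per forecast step:

h = 5;                                /* five-step-ahead forecast      */
mseh = mse[(h-1)*4+1 : h*4, ];        /* assumed: MSE block for step h */
se1 = sqrt(mseh[1,1]);                /* standard error, variable C    */
f1 = forecast[nrow(forecast), 1];     /* last row: five-step forecast  */
lower = f1 - 1.96*se1;
upper = f1 + 1.96*se1;
print f1 se1 lower upper;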

The TSROOT call computes the polynomial roots of the AR and MA equations. When the AR(p) process is written

   y_t = \sum_{i=1}^{p} \phi_i y_{t-i} + \epsilon_t

you can specify the following polynomial equation:

   z^p - \sum_{i=1}^{p} \phi_i z^{p-i} = 0

When all p roots of the preceding equation are inside the unit circle, the AR(p) process is stationary. The MA(q) process is invertible if the following polynomial equation has all roots inside the unit circle:

   z^q + \sum_{i=1}^{q} \theta_i z^{q-i} = 0

where \theta_i are the MA coefficients. For example, the best AR model is selected and estimated by the TSUNIMAR subroutine (see Figure 13.26). You can obtain the roots of the preceding equation by calling the TSROOT subroutine. Because the TSROOT subroutine can handle complex AR or MA coefficients, note that you should add zero imaginary coefficients in the second column of the MATIN matrix for real coefficients. Here is the code:

proc iml;
y = { 2.430 2.506 2.767 2.940 3.169 3.450 3.594 3.774 3.695 3.411
2.718 1.991 2.265 2.446 2.612 3.359 3.429 3.533 3.261 2.612
2.179 1.653 1.832 2.328 2.737 3.014 3.328 3.404 2.981 2.557
2.576 2.352 2.556 2.864 3.214 3.435 3.458 3.326 2.835 2.476
2.373 2.389 2.742 3.210 3.520 3.828 3.628 2.837 2.406 2.675
2.554 2.894 3.202 3.224 3.352 3.154 2.878 2.476 2.303 2.360
2.671 2.867 3.310 3.449 3.646 3.400 2.590 1.863 1.581 1.690
1.771 2.274 2.576 3.111 3.605 3.543 2.769 2.021 2.185 2.588
2.880 3.115 3.540 3.845 3.800 3.579 3.264 2.538 2.582 2.907
3.142 3.433 3.580 3.490 3.475 3.579 2.829 1.909 1.903 2.033
2.360 2.601 3.054 3.386 3.553 3.468 3.187 2.723 2.686 2.821
3.000 3.201 3.424 3.531 };

call tsunimar(ar,v,nar,aic) data=y maxlag=5
   opt=({-1 1}) print=1;
/*-- set up complex coefficient matrix --*/
ar_cx = ar || j(nrow(ar),1,0);
call tsroot(root) matin=ar_cx nar=nar nma=0 print=1;

In Figure 13.27, the roots and their lengths from the origin are shown. The roots are also stored in the matrix
ROOT. All roots are within the unit circle, while the MOD values of the fourth and fifth roots appear to be
sizable (0.9194).
Figure 13.26 Minimum AIC AR Estimation

lag     ar_coef

 1      1.3003068
 2      -0.72328
 3      0.2421928
 4      -0.378757
 5      0.1377273

 aic          innovation_variance

-318.6138     0.0490554

Figure 13.27 Roots of AR Characteristic Polynomial Equation

Roots of AR Characteristic Polynomial

I     Real        Imaginary    MOD(Z)    ATAN(I/R)    Degr

1    -0.29755      0.55991     0.6341      2.0593      117.98
2    -0.29755     -0.55991     0.6341     -2.0593     -117.98
3     0.40529      0           0.4053      0
4     0.74505      0.53866     0.9194      0.6260       35.86
5     0.74505     -0.53866     0.9194     -0.6260      -35.86

Z**5-AR(1)*Z**4-AR(2)*Z**3-AR(3)*Z**2-AR(4)*Z**1-AR(5)=0

The TSROOT subroutine can also recover the polynomial coefficients if the roots are given as an input.
You should specify the QCOEF=1 option when you want to compute the polynomial coefficients instead of
polynomial roots. You can compare the result with the preceding output of the TSUNIMAR call. Here is
the code:

call tsroot(ar_cx) matin=root nar=nar qcoef=1
   nma=0 print=1;

The results are shown in Figure 13.28.

Figure 13.28 Polynomial Coefficients

Polynomial Coefficients

 I    AR(real)    AR(imag)

1 1.30031 0
2 -0.72328 1.11022E-16
3 0.24219 8.32667E-17
4 -0.37876 2.77556E-17
5 0.13773 0
Syntax

TIMSAC routines are controlled by the following statements:

CALL TSBAYSEA (trend, season, series, adjust, abic, data <, order, sorder, rigid, npred, opt, cntl, print>) ;

CALL TSDECOMP (comp, est, aic, data <, xdata, order, sorder, nar, npred, init, opt, icmp, print>) ;

CALL TSMLOCAR (arcoef, ev, nar, aic, start, finish, data <, maxlag, opt, missing, print>) ;

CALL TSMLOMAR (arcoef, ev, nar, aic, start, finish, data <, maxlag, opt, missing, print>) ;

CALL TSMULMAR (arcoef, ev, nar, aic, data <, maxlag, opt, missing, print>) ;

CALL TSPEARS (arcoef, ev, nar, aic, data <, maxlag, opt, missing, print>) ;

CALL TSPRED (forecast, impulse, mse, data, coef, nar, nma <, ev, npred, start, constant>) ;

CALL TSROOT (matout, matin, nar, nma <, qcoef, print>) ;

CALL TSTVCAR (arcoef, variance, est, aic, data <, nar, init, opt, outlier, print>) ;

CALL TSUNIMAR (arcoef, ev, nar, aic, data <, maxlag, opt, missing, print>) ;

Details

This section presents an introductory description of the important topics that are directly related to TIMSAC IML subroutines. The computational details, including algorithms, are described in the section "Computational Details" on page 295. A detailed explanation of each subroutine is not given; instead, basic ideas and common methodologies for all subroutines are described first and are followed by more technical details. Finally, missing values are discussed in the section "Missing Values" on page 300.

Minimum AIC Procedure

The AIC statistic is widely used to select the best model among alternative parametric models. The minimum AIC model selection procedure can be interpreted as a maximization of the expected entropy (Akaike 1981). The entropy of a true probability density function (PDF) \varphi with respect to the fitted PDF f is written as

   B(\varphi, f) = -I(\varphi, f)

where I(\varphi, f) is a Kullback-Leibler information measure, which is defined as

   I(\varphi, f) = \int \log\left[ \frac{\varphi(z)}{f(z)} \right] \varphi(z) \, dz

where the random variable Z is assumed to be continuous. Therefore,

   B(\varphi, f) = E_Z \log f(Z) - E_Z \log \varphi(Z)

where B(\varphi, f) <= 0 and E_Z denotes the expectation concerning the random variable Z. B(\varphi, f) = 0 if and only if \varphi = f (almost surely). The larger the quantity E_Z \log f(Z), the closer the function f is to the true PDF \varphi. Given the data y = (y_1, ..., y_T)' that has the same distribution as the random variable Z, let the likelihood function of the parameter vector \theta be \prod_{t=1}^{T} f(y_t | \theta). Then the average of the log-likelihood function, (1/T) \sum_{t=1}^{T} \log f(y_t | \theta), is an estimate of the expected value of \log f(Z). Akaike (1981) derived the alternative estimate of E_Z \log f(Z) by using the Bayesian predictive likelihood. The AIC is the bias-corrected estimate of -2T E_Z \log f(Z | \hat\theta), where \hat\theta is the maximum likelihood estimate:

   AIC = -2(maximum log-likelihood) + 2(number of free parameters)

Let \theta = (\theta_1, ..., \theta_K)' be a K x 1 parameter vector that is contained in the parameter space \Theta_K. Given the data y, the log-likelihood function is

   \ell(\theta) = \sum_{t=1}^{T} \log f(y_t | \theta)

Suppose the probability density function f(y | \theta) has the true PDF \varphi(y) = f(y | \theta^0), where the true parameter vector \theta^0 is contained in \Theta_K. Let \hat\theta_K be a maximum likelihood estimate. The maximum of the log-likelihood function is denoted as \ell(\hat\theta_K) = \max_{\theta \in \Theta_K} \ell(\theta). The expected log-likelihood function is defined by

   \ell^*(\theta) = T E_Z \log f(Z | \theta)

The Taylor series expansion of the expected log-likelihood function around the true parameter \theta^0 gives the following asymptotic relationship:

   \ell^*(\theta) =_A \ell^*(\theta^0) + T (\theta - \theta^0)' E_Z \frac{\partial \log f(Z | \theta^0)}{\partial \theta} - \frac{T}{2} (\theta - \theta^0)' I(\theta^0) (\theta - \theta^0)

where I(\theta^0) is the information matrix and =_A stands for asymptotic equality. Note that \partial \log f(z | \theta) / \partial \theta = 0 at \theta = \theta^0, since \log f(z | \theta) is maximized at \theta^0. By substituting \hat\theta_K, the expected log-likelihood function can be written as

   \ell^*(\hat\theta_K) =_A \ell^*(\theta^0) - \frac{T}{2} (\hat\theta_K - \theta^0)' I(\theta^0) (\hat\theta_K - \theta^0)

The maximum likelihood estimator is asymptotically normally distributed under the regularity conditions:

   \sqrt{T} I(\theta^0)^{1/2} (\hat\theta_K - \theta^0) \to_d N(0, I_K)

Therefore,

   T (\hat\theta_K - \theta^0)' I(\theta^0) (\hat\theta_K - \theta^0) \sim_a \chi^2_K

The mean expected log-likelihood function, \ell^*(K) = E_Y \ell^*(\hat\theta_K), becomes

   \ell^*(K) =_A \ell^*(\theta^0) - \frac{K}{2}

When the Taylor series expansion of the log-likelihood function around \hat\theta_K is used, the log-likelihood function \ell(\theta) is written

   \ell(\theta) =_A \ell(\hat\theta_K) + (\theta - \hat\theta_K)' \left. \frac{\partial \ell(\theta)}{\partial \theta} \right|_{\hat\theta_K} + \frac{1}{2} (\theta - \hat\theta_K)' \left. \frac{\partial^2 \ell(\theta)}{\partial \theta \partial \theta'} \right|_{\hat\theta_K} (\theta - \hat\theta_K)

Since \ell(\hat\theta_K) is the maximum log-likelihood function, \partial \ell(\theta) / \partial \theta |_{\hat\theta_K} = 0. Note that plim [ -(1/T) \partial^2 \ell(\theta) / \partial \theta \partial \theta' |_{\hat\theta_K} ] = I(\theta^0) if the maximum likelihood estimator \hat\theta_K is a consistent estimator of \theta. Replacing \theta with the true parameter \theta^0 and taking expectations with respect to the random variable Y,

   E_Y \ell(\theta^0) =_A E_Y \ell(\hat\theta_K) - \frac{K}{2}

Consider the following relationship:

   \ell^*(\theta^0) = T E_Z \log f(Z | \theta^0)
                    = \sum_{t=1}^{T} E_Y \log f(Y_t | \theta^0)
                    = E_Y \ell(\theta^0)

From the previous derivation,

   \ell^*(K) =_A \ell^*(\theta^0) - \frac{K}{2}

Therefore,

   \ell^*(K) =_A E_Y \ell(\hat\theta_K) - K

The natural estimator for E_Y \ell(\hat\theta_K) is \ell(\hat\theta_K). Using this estimator, you can write the mean expected log-likelihood function as

   \ell^*(K) =_A \ell(\hat\theta_K) - K

Consequently, the AIC is defined as an asymptotically unbiased estimator of -2(mean expected log-likelihood):

   AIC(K) = -2 \ell(\hat\theta_K) + 2K

In practice, the previous asymptotic result is expected to be valid in finite samples if the number of free parameters does not exceed 2\sqrt{T} and the upper bound of the number of free parameters is T/2. It is worth noting that the amount of the AIC is not meaningful in itself, since this value is not the Kullback-Leibler information measure. The difference of AIC values can be used to select the model. The difference of two AIC values is considered insignificant if it is far less than 1. It is possible to find a better model when the minimum AIC model contains many free parameters.
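
As a minimal sketch of this selection rule, the following IML statements compare two fitted models by AIC; the log likelihoods and parameter counts are hypothetical values that stand in for the results of earlier estimations:

proc iml;
loglik1 = -120.4;  k1 = 3;            /* hypothetical fit 1           */
loglik2 = -118.9;  k2 = 5;            /* hypothetical fit 2           */
aic1 = -2*loglik1 + 2*k1;
aic2 = -2*loglik2 + 2*k2;
if aic1 < aic2 then best = 1;         /* choose the minimum AIC model */
else best = 2;
print aic1 aic2 best;
quit;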
Smoothness Priors Modeling

Consider the time series y_t:

   y_t = f(t) + \epsilon_t

where f(t) is an unknown smooth function and \epsilon_t is an iid random variable with zero mean and positive variance \sigma^2. Whittaker (1923) provides the solution, which balances a tradeoff between closeness to the data and the kth-order difference equation. For a fixed value of \lambda and k, the solution \hat{f} satisfies

   \min_f \sum_{t=1}^{T} \left\{ [y_t - f(t)]^2 + \lambda^2 [\nabla^k f(t)]^2 \right\}
where \nabla^k denotes the kth-order difference operator. The value of \lambda can be viewed as the smoothness tradeoff measure. Akaike (1980a) proposed the Bayesian posterior PDF to solve this problem:

   \ell(f) = \exp\left[ -\frac{1}{2\sigma^2} \sum_{t=1}^{T} \{y_t - f(t)\}^2 \right] \exp\left[ -\frac{\lambda^2}{2\sigma^2} \sum_{t=1}^{T} \{\nabla^k f(t)\}^2 \right]

Therefore, the solution can be obtained when the function \ell(f) is maximized.
Assume that the time series is decomposed as follows:

   y_t = T_t + S_t + \epsilon_t

where T_t denotes the trend component and S_t is the seasonal component. The trend component follows the kth-order stochastically perturbed difference equation

   \nabla^k T_t = w_{1t},    w_{1t} ~ N(0, \tau_1^2)

For example, the polynomial trend component for k = 2 is written as

   T_t = 2T_{t-1} - T_{t-2} + w_{1t}

To accommodate regular seasonal effects, the stochastic seasonal relationship is used:

   \sum_{i=0}^{L-1} S_{t-i} = w_{2t},    w_{2t} ~ N(0, \tau_2^2)

where L is the number of seasons within a period. In the context of Whittaker and Akaike, the smoothness priors problem can be solved by the maximization of

   \ell(f) = \exp\left[ -\frac{1}{2\sigma^2} \sum_{t=1}^{T} (y_t - T_t - S_t)^2 \right] \exp\left[ -\frac{\tau_1^2}{2\sigma^2} \sum_{t=1}^{T} (\nabla^k T_t)^2 \right] \exp\left[ -\frac{\tau_2^2}{2\sigma^2} \sum_{t=1}^{T} \left( \sum_{i=0}^{L-1} S_{t-i} \right)^2 \right]

The values of the hyperparameters \tau_1^2 and \tau_2^2 refer to a measure of uncertainty of prior information. For example, a large value of \tau_1^2 implies a relatively smooth trend component. The ratio \tau_i^2 / \sigma^2 (i = 1, 2) can be considered as a signal-to-noise ratio.

Kitagawa and Gersch (1984) use the Kalman filter recursive computation for the likelihood of the tradeoff parameters. The hyperparameters are estimated by combining the grid search and optimization method. The state space model and Kalman filter recursive computation are discussed in the section "State Space and Kalman Filter Method" on page 298.
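
For a fixed smoothness tradeoff, the Whittaker solution can be computed directly, since maximizing \ell(f) is a penalized least squares problem. The following statements give a minimal sketch for k = 2, solving (I + \lambda^2 D'D) f = y, where D is the second-difference matrix; the series y and the value of lambda are hypothetical:

proc iml;
y = {1.2, 1.5, 1.9, 2.6, 2.4, 2.8, 3.5, 3.1, 3.9, 4.2};
T = nrow(y);
lambda = 5;                           /* hypothetical tradeoff value  */
D = j(T-2, T, 0);                     /* second-difference operator   */
do t = 1 to T-2;
   D[t,t] = 1;  D[t,t+1] = -2;  D[t,t+2] = 1;
end;
fhat = solve(i(T) + lambda##2 * D`*D, y);
print y fhat;
quit;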

Bayesian Seasonal Adjustment

Seasonal phenomena are frequently observed in many economic and business time series. For example, consumption expenditure might have strong seasonal variations because of Christmas spending. The seasonal phenomena are repeatedly observed after a regular period of time. The number of seasons within a period is defined as the smallest time span for this repetitive observation. Monthly consumption expenditure shows a strong increase during the Christmas season, with 12 seasons per period.
There are three major approaches to seasonal time series: the regression model, the moving average model,
and the seasonal ARIMA model.

Regression Model

Let the trend component be T_t = \sum_{i=1}^{m} \alpha_i U_{it} and the seasonal component be S_t = \sum_{j=1}^{m} \beta_j V_{jt}. Then the additive time series can be written as the regression model

   y_t = \sum_{i=1}^{m} \alpha_i U_{it} + \sum_{j=1}^{m} \beta_j V_{jt} + \epsilon_t

In practice, the trend component can be written as the mth-order polynomial, such as

   T_t = \sum_{i=0}^{m} \alpha_i t^i

The seasonal component can be approximated by the seasonal dummies (D_{jt}):

   S_t = \sum_{j=1}^{L-1} \beta_j D_{jt}

where L is the number of seasons within a period. The least squares method is applied to estimate the parameters \alpha_i and \beta_j.

The seasonally adjusted series is obtained by subtracting the estimated seasonal component from the original series. Usually, the error term \epsilon_t is assumed to be white noise, although sometimes autocorrelation in the regression residuals must be allowed. However, the regression method is not robust to the choice of regression function type, especially at the beginning and end of the series.
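
The following statements sketch this regression approach on a simulated monthly series: a quadratic trend plus seasonal dummies is fit by least squares, and the seasonal estimate is subtracted from the data. The series and all parameter values are hypothetical:

proc iml;
T = 48;  L = 12;
tm = (1:T)`;                          /* time index                   */
y = 10 + 0.2*tm + 3*sin(2*arcos(-1)*tm/L) + rannor(j(T,1,1));
X = j(T,1,1) || tm || tm##2;          /* quadratic trend columns      */
do k = 1 to L-1;                      /* seasonal dummy columns D_jt  */
   d = j(T,1,0);
   do t = 1 to T;
      if mod(t-1, L) + 1 = k then d[t] = 1;
   end;
   X = X || d;
end;
b = solve(X`*X, X`*y);                /* least squares estimates      */
shat = X[, 4:ncol(X)] * b[4:nrow(b)]; /* estimated seasonal component */
adj = y - shat;                       /* seasonally adjusted series   */
print (y || shat || adj)[colname={"y" "seasonal" "adjusted"}];
quit;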

Moving Average Model

If you assume that the annual sum of a seasonal time series has small seasonal fluctuations, the nonseasonal component N_t = T_t + \epsilon_t can be estimated by using the moving average method:

   \hat{N}_t = \sum_{i=-m}^{m} \lambda_i y_{t-i}

where m is a positive integer and \lambda_i is a symmetric constant such that \lambda_i = \lambda_{-i} and \sum_{i=-m}^{m} \lambda_i = 1.

When the data are not available, either an asymmetric moving average is used, or the forecast data are augmented to use the symmetric weights. The X-11 procedure is a complex modification of this moving-average method.
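
A minimal sketch of this estimator follows, using equal symmetric weights with m = 2 on a short hypothetical series; the endpoints are left missing, where an asymmetric average or forecast augmentation would be needed:

proc iml;
y = {4, 6, 5, 7, 9, 8, 10, 12, 11, 13};
m = 2;
lambda = j(2*m+1, 1, 1/(2*m+1));      /* symmetric weights, sum = 1   */
T = nrow(y);
nhat = j(T, 1, .);
do t = m+1 to T-m;
   nhat[t] = lambda` * y[t-m:t+m];    /* centered moving average      */
end;
print y nhat;
quit;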
Seasonal ARIMA Model

The regression and moving-average approaches assume that the seasonal component is deterministic and independent of other nonseasonal components. The time series approach is used to handle the stochastic trend and seasonal components.

The general ARIMA model can be written

   \prod_{j=1}^{m} \phi_j(B) \prod_{i=1}^{k} (1 - B^{s_i})^{d_i} \tilde{y}_t = \theta_0 + \prod_{i=1}^{q} \theta_i(B) \epsilon_t

where B is the backshift operator and

   \phi_j(B) = 1 - \phi_1 B - ... - \phi_{p_j} B^{p_j}
   \theta_i(B) = 1 - \theta_1 B - ... - \theta_{q_i} B^{q_i}

and \tilde{y}_t = y_t - E(Y_t) if d_i = 0; otherwise, \tilde{y}_t = y_t. The power of B, s_i, can be considered as a seasonal factor. Specifically, the Box-Jenkins multiplicative seasonal ARIMA(p,d,q)(P,D,Q)_s model is written as

   \phi_p(B) \Phi_P(B^s) (1 - B)^d (1 - B^s)^D \tilde{y}_t = \theta_q(B) \Theta_Q(B^s) \epsilon_t

ARIMA modeling is appropriate for particular time series and requires burdensome computation.

The TSBAYSEA subroutine combines the simple characteristics of the regression approach and time series modeling. The TSBAYSEA and X-11 procedures use model-based seasonal adjustment. The symmetric weights of the standard X-11 option can be approximated by using the integrated MA form

   (1 - B)(1 - B^{12}) y_t = \theta(B) \epsilon_t

With a fixed value \phi, the TSBAYSEA subroutine is approximated as

   (1 - \phi B)(1 - B)(1 - B^{12}) y_t = \theta(B) \epsilon_t

The subroutine is flexible enough to handle trading-day or leap-year effects, the shift of the base observation, and missing values. The TSBAYSEA-type modeling approach has some advantages: it clearly defines the statistical model of the time series; modification of the basic model can be an efficient method of choosing a particular procedure for the seasonal adjustment of a given time series; and the use of the concept of the likelihood provides a minimum AIC model selection approach.

Nonstationary Time Series

The subroutines TSMLOCAR, TSMLOMAR, and TSTVCAR are used to analyze nonstationary time series
models. The AIC statistic is extensively used to analyze the locally stationary model.

Locally Stationary AR Model

When the time series is nonstationary, the TSMLOCAR (univariate) and TSMLOMAR (multivariate) subroutines can be employed. The whole span of the series is divided into locally stationary blocks of data, and then the TSMLOCAR and TSMLOMAR subroutines estimate a stationary AR model by using the least squares method on each stationary block. The homogeneity of two different blocks of data is tested by using the AIC.

Given a set of data {y_1, ..., y_T}, the data can be divided into k blocks of sizes t_1, ..., t_k, where t_1 + ... + t_k = T, and k and t_i are unknown. The locally stationary model is fitted to the data

   y_t = \alpha_0^i + \sum_{j=1}^{p_i} \alpha_j^i y_{t-j} + \epsilon_t^i

where

   T_{i-1} = \sum_{j=1}^{i-1} t_j < t <= T_i = \sum_{j=1}^{i} t_j    for i = 1, ..., k

and \epsilon_t^i is a Gaussian white noise with E \epsilon_t^i = 0 and E (\epsilon_t^i)^2 = \sigma_i^2. Therefore, the log-likelihood function of the locally stationary series is

   \ell = -\frac{1}{2} \sum_{i=1}^{k} \left[ t_i \log(2\pi\sigma_i^2) + \frac{1}{\sigma_i^2} \sum_{t=T_{i-1}+1}^{T_i} \left( y_t - \alpha_0^i - \sum_{j=1}^{p_i} \alpha_j^i y_{t-j} \right)^2 \right]

Given \alpha_j^i, j = 0, ..., p_i, the maximum of the log-likelihood function is attained at

   \hat\sigma_i^2 = \frac{1}{t_i} \sum_{t=T_{i-1}+1}^{T_i} \left( y_t - \hat\alpha_0^i - \sum_{j=1}^{p_i} \hat\alpha_j^i y_{t-j} \right)^2

The concentrated log-likelihood function is given by

   \ell^* = -\frac{T}{2} [1 + \log(2\pi)] - \frac{1}{2} \sum_{i=1}^{k} t_i \log(\hat\sigma_i^2)

Therefore, the maximum likelihood estimates, \hat\alpha_j^i and \hat\sigma_i^2, are obtained by minimizing the following local SSE:

   SSE = \sum_{t=T_{i-1}+1}^{T_i} \left( y_t - \hat\alpha_0^i - \sum_{j=1}^{p_i} \hat\alpha_j^i y_{t-j} \right)^2

The least squares estimation of the stationary model is explained in the section "Least Squares and Householder Transformation" on page 295.

The AIC for the locally stationary model over the pooled data is written as

   \sum_{i=1}^{k} t_i \log(\hat\sigma_i^2) + 2 \sum_{i=1}^{k} (p_i + intercept + 1)

where intercept = 1 if the intercept term (\alpha_0^i) is estimated; otherwise, intercept = 0. The number of stationary blocks (k), the size of each block (t_i), and the order of the locally stationary model are determined by the AIC.
Consider the autoregressive model fitted over the block of data {y_1, ..., y_T}, and let this model M_1 be an AR(p_1) process. When additional data {y_{T+1}, ..., y_{T+T_1}} are available, a new model M_2, an AR(p_2) process, is fitted over this new data set, assuming that these data are independent of the previous data. Then the AICs for models M_1 and M_2 are defined as

   AIC_1 = T \log(\sigma_1^2) + 2(p_1 + intercept + 1)
   AIC_2 = T_1 \log(\sigma_2^2) + 2(p_2 + intercept + 1)

The joint model AIC for M_1 and M_2 is obtained by summation:

   AIC_J = AIC_1 + AIC_2

When the two data sets are pooled and estimated over the pooled data set {y_1, ..., y_{T+T_1}}, the AIC of the pooled model is

   AIC_A = (T + T_1) \log(\hat\sigma_A^2) + 2(p_A + intercept + 1)

where \sigma_A^2 is the pooled error variance and p_A is the order chosen to fit the pooled data set.

Decision

 • If AIC_J < AIC_A, switch to the new model, since there is a change in the structure of the time series.

 • If AIC_J >= AIC_A, pool the two data sets, since the two data sets are considered to be homogeneous.

If new observations are available, repeat the preceding steps to determine the homogeneity of the data. The basic idea of locally stationary AR modeling is that, if the structure of the time series is not changed, you should use the additional information to improve the model fitting, but you need to follow the new structure of the time series if there is any change.
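
The following statements give a minimal sketch of this decision rule; the block sizes, error variances, and orders are hypothetical stand-ins for quantities estimated from the two blocks and from the pooled data:

proc iml;
T  = 600;  s1 = 1.05;  p1 = 8;        /* previous block (hypothetical) */
T1 = 300;  s2 = 1.16;  p2 = 7;        /* new block (hypothetical)      */
sA = 1.20;  pA = 8;                   /* pooled fit (hypothetical)     */
intercept = 1;
aic1 = T  * log(s1) + 2*(p1 + intercept + 1);
aic2 = T1 * log(s2) + 2*(p2 + intercept + 1);
aicJ = aic1 + aic2;                   /* joint model AIC               */
aicA = (T + T1) * log(sA) + 2*(pA + intercept + 1);
if aicJ < aicA then print "Switch to the new model";
else print "Pool the two data sets";
quit;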

Time-Varying AR Coefficient Model

Another approach to nonstationary time series, especially those that are nonstationary in the covariance, is time-varying AR coefficient modeling. When the time series is nonstationary in the covariance, the problem in modeling this series is related to an efficient parameterization. With a Bayesian approach, it is possible to estimate a model that has a large number of implicit parameters of complex structure by using a relatively small number of hyperparameters.

The TSTVCAR subroutine uses smoothness priors by imposing stochastically perturbed difference equation constraints on each AR coefficient and frequency response function. The variance of each AR coefficient distribution constitutes a hyperparameter included in the state space model. The likelihood of these hyperparameters is computed by the Kalman filter recursive algorithm.

The time-varying AR coefficient model is written

   y_t = \sum_{i=1}^{m} \alpha_{it} y_{t-i} + \epsilon_t
where the time-varying coefficients \alpha_{it} are assumed to change gradually with time. The following simple stochastic difference equation constraint is imposed on each coefficient:

   \nabla^k \alpha_{it} = w_{it},    w_{it} ~ N(0, \tau^2),    i = 1, ..., m

The frequency response function of the AR process is written

   A(f) = 1 - \sum_{j=1}^{m} \alpha_{jt} \exp(-2\pi ijf)

The smoothness of this function can be measured by the kth-derivative smoothness constraint,

   R_k = \int_{-1/2}^{1/2} \left| \frac{d^k A(f)}{df^k} \right|^2 df = (2\pi)^{2k} \sum_{j=1}^{m} j^{2k} \alpha_{jt}^2

Then the TSTVCAR call imposes zero and second derivative smoothness constraints. The time-varying AR coefficients are the solution of the following constrained least squares:

   \sum_{t=1}^{T} \left( y_t - \sum_{i=1}^{m} \alpha_{it} y_{t-i} \right)^2 + \tau^2 \sum_{t=1}^{T} \sum_{i=1}^{m} (\nabla^k \alpha_{it})^2 + \lambda^2 \sum_{t=1}^{T} \sum_{i=1}^{m} i^2 \alpha_{it}^2 + \nu^2 \sum_{t=1}^{T} \sum_{i=1}^{m} \alpha_{it}^2

where \tau^2, \lambda^2, and \nu^2 are hyperparameters of the prior distribution.

Using a state space representation, the model is

   x_t = F x_{t-1} + G w_t
   y_t = H_t x_t + \epsilon_t

where

   x_t = (\alpha_{1t}, ..., \alpha_{mt}, ..., \alpha_{1,t-k+1}, ..., \alpha_{m,t-k+1})'
   H_t = (y_{t-1}, ..., y_{t-m}, ..., 0, ..., 0)
   w_t = (w_{1t}, ..., w_{mt})'

   k = 1:  F = I_m    G = I_m

   k = 2:  F = \begin{bmatrix} 2I_m & -I_m \\ I_m & 0 \end{bmatrix}    G = \begin{bmatrix} I_m \\ 0 \end{bmatrix}

   k = 3:  F = \begin{bmatrix} 3I_m & -3I_m & I_m \\ I_m & 0 & 0 \\ 0 & I_m & 0 \end{bmatrix}    G = \begin{bmatrix} I_m \\ 0 \\ 0 \end{bmatrix}

   \begin{bmatrix} w_t \\ \epsilon_t \end{bmatrix} ~ N\left( 0, \begin{bmatrix} \tau^2 I & 0 \\ 0 & \sigma^2 \end{bmatrix} \right)

The computation of the likelihood function is straightforward. See the section "State Space and Kalman Filter Method" on page 298 for the computation method.
Multivariate Time Series Analysis

The subroutines TSMULMAR, TSMLOMAR, and TSPRED analyze multivariate time series. The periodic AR model, TSPEARS, can also be estimated by using a vector AR procedure, since the periodic AR series can be represented as the covariance-stationary vector autoregressive model.

The stationary vector AR model is estimated and the order of the model (or of each variable) is automatically determined by the minimum AIC procedure. The stationary vector AR model is written

   y_t = A_0 + A_1 y_{t-1} + ... + A_p y_{t-p} + \epsilon_t
   \epsilon_t ~ N(0, \Sigma)

Using the LDL' factorization method, the error covariance is decomposed as

   \Sigma = L D L'

where L is a unit lower triangular matrix and D is a diagonal matrix. Then the instantaneous response model is defined as

   C y_t = A_0^* + A_1^* y_{t-1} + ... + A_p^* y_{t-p} + \eta_t

where C = L^{-1}, A_i^* = L^{-1} A_i for i = 0, 1, ..., p, and \eta_t = L^{-1} \epsilon_t. Each equation of the instantaneous response model can be estimated independently, since its error covariance matrix is the diagonal matrix D. Maximum likelihood estimates are obtained through the least squares method when the disturbances are normally distributed and the presample values are fixed.

The TSMULMAR subroutine estimates the instantaneous response model. The VAR coefficients are computed by using the relationship between the VAR and instantaneous models.
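
The following statements sketch the LDL' factorization itself, computed here from the Cholesky root of a hypothetical error covariance matrix:

proc iml;
sigma = {4 2 1, 2 5 3, 1 3 6};        /* hypothetical covariance      */
U = root(sigma);                      /* upper triangular, U`*U=sigma */
C = U`;                               /* lower triangular factor      */
s = vecdiag(C);
L = C * diag(1/s);                    /* unit lower triangular        */
D = diag(s##2);                       /* diagonal matrix              */
check = L*D*L`;                       /* reproduces sigma             */
print L D check;
quit;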
The general VARMA model can be transformed as an infinite-order MA process under certain conditions:

   y_t = \mu + \epsilon_t + \sum_{m=1}^{\infty} \Psi_m \epsilon_{t-m}

In the context of the VAR(p) model, the coefficient \Psi_m can be interpreted as the m-lagged response of a unit increase in the disturbances at time t:

   \Psi_m = \frac{\partial y_{t+m}}{\partial \epsilon_t'}

The lagged response on the one-unit increase in the orthogonalized disturbances \eta_t is denoted

   \frac{\partial y_{t+m}}{\partial \eta_{jt}} = \frac{\partial E(y_{t+m} | y_{jt}, y_{j-1,t}, ..., X_t)}{\partial y_{jt}} = \Psi_m L_j

where L_j is the jth column of the unit triangular matrix L and X_t = (y_{t-1}, ..., y_{t-p}). When you estimate the VAR model by using the TSMULMAR call, it is easy to compute this impulse response function.

The MSE of the m-step prediction is computed as

   E (y_{t+m} - y_{t+m|t})(y_{t+m} - y_{t+m|t})' = \Sigma + \Psi_1 \Sigma \Psi_1' + ... + \Psi_{m-1} \Sigma \Psi_{m-1}'
Note that \epsilon_t = L \eta_t. Then the covariance matrix of \epsilon_t is decomposed as

   \Sigma = \sum_{i=1}^{n} L_i L_i' d_{ii}

where d_{ii} is the ith diagonal element of the matrix D and n is the number of variables. The MSE matrix can be written as

   \sum_{i=1}^{n} d_{ii} \left[ L_i L_i' + \Psi_1 L_i L_i' \Psi_1' + ... + \Psi_{m-1} L_i L_i' \Psi_{m-1}' \right]

Therefore, the contribution of the ith orthogonalized innovation to the MSE is

   V_i = d_{ii} \left[ L_i L_i' + \Psi_1 L_i L_i' \Psi_1' + ... + \Psi_{m-1} L_i L_i' \Psi_{m-1}' \right]

The ith forecast error variance decomposition is obtained from the diagonal elements of the matrix V_i.
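
A minimal sketch of this decomposition for a bivariate model and a two-step forecast follows; the matrices L, D, and \Psi_1 are hypothetical stand-ins for estimated quantities:

proc iml;
L = {1 0, 0.4 1};                     /* unit lower triangular        */
D = diag({2 1});                      /* innovation variances d_ii    */
psi1 = {0.5 0.1, 0.2 0.3};            /* MA(1) coefficient matrix     */
n = nrow(L);
mse = j(n, n, 0);
do i = 1 to n;
   Li = L[, i];
   Vi = D[i,i] # (Li*Li` + psi1*Li*Li`*psi1`);
   print i Vi;                        /* contribution of innovation i */
   mse = mse + Vi;
end;
print mse;                            /* two-step forecast MSE        */
quit;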
The nonstationary multivariate series can be analyzed by the TSMLOMAR subroutine. The estimation and model identification procedure is analogous to the univariate nonstationary procedure, which is explained in the section "Nonstationary Time Series" on page 287.

A time series y_t is periodically correlated with period d if E y_t = E y_{t+d} and E y_s y_t = E y_{s+d} y_{t+d}. Let y_t be autoregressive of period d with AR orders (p_1, ..., p_d); that is,

   y_t = \sum_{j=1}^{p_t} \alpha_{jt} y_{t-j} + \epsilon_t

where \epsilon_t is uncorrelated with mean zero and E \epsilon_t^2 = \sigma_t^2, p_t = p_{t+d}, \sigma_t^2 = \sigma_{t+d}^2, and \alpha_{jt} = \alpha_{j,t+d} (j = 1, ..., p_t). Define the new variable such that x_{jt} = y_{j+d(t-1)}. The vector series x_t = (x_{1t}, ..., x_{dt})' is autoregressive of order p, where p = \max_j int((p_j - j)/d) + 1. The TSPEARS subroutine estimates the periodic autoregressive model by using minimum AIC vector AR modeling.

The TSPRED subroutine computes the one-step or multistep forecast of the multivariate ARMA model if the ARMA parameter estimates are provided. In addition, the subroutine TSPRED produces the (intermediate and permanent) impulse response function and performs forecast error variance decomposition for the vector AR model.

Spectral Analysis

The autocovariance function of the random variable Y_t is defined as

   C_{YY}(k) = E(Y_{t+k} Y_t)

where E Y_t = 0. When the real-valued process Y_t is stationary and its autocovariance is absolutely summable, the population spectral density function is obtained by using the Fourier transform of the autocovariance function:

   f(g) = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} C_{YY}(k) \exp(-igk),    -\pi <= g <= \pi

where i = \sqrt{-1} and C_{YY}(k) is the autocovariance function such that \sum_{k=-\infty}^{\infty} |C_{YY}(k)| < \infty.

Consider the autocovariance generating function

   \gamma(z) = \sum_{k=-\infty}^{\infty} C_{YY}(k) z^k

where C_{YY}(k) = C_{YY}(-k) and z is a complex scalar. The spectral density function can be represented as

   f(g) = \frac{1}{2\pi} \gamma(\exp(-ig))

The stationary ARMA(p,q) process is denoted

   \phi(B) y_t = \theta(B) \epsilon_t,    \epsilon_t ~ (0, \sigma^2)

where \phi(B) and \theta(B) do not have common roots. Note that the autocovariance generating function of the linear process y_t = \psi(B) \epsilon_t is given by

   \gamma(B) = \sigma^2 \psi(B) \psi(B^{-1})

For the ARMA(p,q) process, \psi(B) = \theta(B) / \phi(B). Therefore, the spectral density function of the stationary ARMA(p,q) process becomes

   f(g) = \frac{\sigma^2}{2\pi} \left| \frac{\theta(\exp(-ig)) \theta(\exp(ig))}{\phi(\exp(-ig)) \phi(\exp(ig))} \right|

The spectral density function of a white noise is a constant:

   f(g) = \frac{\sigma^2}{2\pi}

The spectral density function of the AR(1) process (\phi(B) = 1 - \phi_1 B) is given by

   f(g) = \frac{\sigma^2}{2\pi (1 - 2\phi_1 \cos(g) + \phi_1^2)}

The spectrum of the AR(1) process has its minimum at g = 0 and its maximum at g = \pm\pi if \phi_1 < 0, while the spectral density function attains its maximum at g = 0 and its minimum at g = \pm\pi if \phi_1 > 0. When the series is positively autocorrelated, its spectral density function is dominated by low frequencies. It is interesting to observe that the spectrum approaches

   \frac{\sigma^2}{4\pi} \frac{1}{1 - \cos(g)}

as \phi_1 -> 1. This relationship shows that the series is difference-stationary if its spectral density function has a remarkable peak near 0.

The spectrum of the AR(2) process (\phi(B) = 1 - \phi_1 B - \phi_2 B^2) equals

   f(g) = \frac{\sigma^2}{2\pi} \frac{1}{4\phi_2 \left[ \cos(g) + \frac{\phi_1 (1 - \phi_2)}{4\phi_2} \right]^2 + \frac{(1 + \phi_2)^2 (4\phi_2 + \phi_1^2)}{4\phi_2}}

Refer to Anderson (1971) for details of the characteristics of this spectral density function of the AR(2) process.
In practice, the population spectral density function cannot be computed. There are many ways of computing the sample spectral density function. The TSBAYSEA and TSMLOCAR subroutines compute the power spectrum by using AR coefficients and the white noise variance.

The power spectral density function of Y_t is derived by using the Fourier transformation of C_{YY}(k):

   f_{YY}(g) = \sum_{k=-\infty}^{\infty} \exp(-2\pi igk) C_{YY}(k),    -\frac{1}{2} <= g <= \frac{1}{2}

where i = \sqrt{-1} and g denotes frequency. The autocovariance function can also be written as

   C_{YY}(k) = \int_{-1/2}^{1/2} \exp(2\pi igk) f_{YY}(g) \, dg

Consider the following stationary AR(p) process:

   y_t - \sum_{i=1}^{p} \phi_i y_{t-i} = \epsilon_t

where \epsilon_t is a white noise with mean zero and constant variance \sigma^2.

The autocovariance function of the white noise \epsilon_t equals

   C_{\epsilon\epsilon}(k) = \delta_{k0} \sigma^2

where \delta_{k0} = 1 if k = 0; otherwise, \delta_{k0} = 0. Therefore, the power spectral density of the white noise is f_{\epsilon\epsilon}(g) = \sigma^2, -1/2 <= g <= 1/2. Note that, with \phi_0 = -1,

   C_{\epsilon\epsilon}(k) = \sum_{m=0}^{p} \sum_{n=0}^{p} \phi_m \phi_n C_{YY}(k - m + n)

Using the following autocovariance function of Y_t,

   C_{YY}(k) = \int_{-1/2}^{1/2} \exp(2\pi igk) f_{YY}(g) \, dg

the autocovariance function of the white noise is denoted as

   C_{\epsilon\epsilon}(k) = \sum_{m=0}^{p} \sum_{n=0}^{p} \phi_m \phi_n \int_{-1/2}^{1/2} \exp(2\pi ig(k - m + n)) f_{YY}(g) \, dg
                           = \int_{-1/2}^{1/2} \exp(2\pi igk) \left| 1 - \sum_{m=1}^{p} \phi_m \exp(-2\pi igm) \right|^2 f_{YY}(g) \, dg

On the other hand, another formula for C_{\epsilon\epsilon}(k) gives

   C_{\epsilon\epsilon}(k) = \int_{-1/2}^{1/2} \exp(2\pi igk) f_{\epsilon\epsilon}(g) \, dg

Therefore,

   f_{\epsilon\epsilon}(g) = \left| 1 - \sum_{m=1}^{p} \phi_m \exp(-2\pi igm) \right|^2 f_{YY}(g)

Since f_{\epsilon\epsilon}(g) = \sigma^2, the rational spectrum of Y_t is

   f_{YY}(g) = \frac{\sigma^2}{\left| 1 - \sum_{m=1}^{p} \phi_m \exp(-2\pi igm) \right|^2}

To compute the power spectrum, the estimated values of the white noise variance \hat\sigma^2 and the AR coefficients \hat\phi_m are used. The order of the AR process can be determined by using the minimum AIC procedure.
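
The following statements sketch this computation, evaluating the rational spectrum on a frequency grid with the AR(5) estimates from Figure 13.26:

proc iml;
phi = {1.3003068, -0.72328, 0.2421928, -0.378757, 0.1377273};
sigma2 = 0.0490554;
pi = arcos(-1);
g = do(0, 0.5, 0.05)`;                /* frequency grid                */
f = j(nrow(g), 1, 0);
do k = 1 to nrow(g);
   re = 1;  im = 0;                   /* 1 - sum(phi_m exp(-2 pi igm)) */
   do m = 1 to nrow(phi);
      re = re - phi[m]*cos(2*pi*g[k]*m);
      im = im + phi[m]*sin(2*pi*g[k]*m);
   end;
   f[k] = sigma2 / (re##2 + im##2);   /* rational spectrum at g        */
end;
print (g || f)[colname={"g" "f_YY"}];
quit;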

Computational Details

Least Squares and Householder Transformation

Consider the univariate AR(p) process

   y_t = \alpha_0 + \sum_{i=1}^{p} \alpha_i y_{t-i} + \epsilon_t

Define the design matrix X:

   X = \begin{bmatrix} 1 & y_p & ... & y_1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & y_{T-1} & ... & y_{T-p} \end{bmatrix}

Let y = (y_{p+1}, ..., y_T)'. The least squares estimate, \hat{a} = (X'X)^{-1} X'y, is the approximation to the maximum likelihood estimate of a = (\alpha_0, \alpha_1, ..., \alpha_p)' if \epsilon_t is assumed to be Gaussian error disturbances. Combining X and y as

   Z = [ X : y ]

the Z matrix can be decomposed as

   Z = QU = Q \begin{bmatrix} R & w_1 \\ 0 & w_2 \end{bmatrix}

where Q is an orthogonal matrix and R is an upper triangular matrix, w_1 = (w_1, ..., w_{p+1})', and w_2 = (w_{p+2}, 0, ..., 0)'. In addition,

   Q'y = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_{T-p} \end{bmatrix}

The least squares estimate that uses the Householder transformation is computed by solving the linear system

   R a = w_1

The unbiased residual variance estimate is

   \hat\sigma^2 = \frac{1}{T-p} \sum_{i=p+2}^{T-p} w_i^2 = \frac{w_{p+2}^2}{T-p}

and

   AIC = (T-p) \log(\hat\sigma^2) + 2(p+1)

In practice, least squares estimation does not require the orthogonal matrix Q. The TIMSAC subroutines compute the upper triangular matrix without computing the matrix Q.
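
The following statements give a minimal sketch of this scheme. For brevity, the upper triangular factor of Z = (X, y) is obtained here from the Cholesky root of Z'Z rather than by Householder transformations, which yields the same R, w_1, and w_2 up to sign; the AR(2) series y is hypothetical:

proc iml;
y = {2.4, 2.5, 2.8, 2.9, 3.2, 3.5, 3.6, 3.8, 3.7, 3.4,
     2.7, 2.0, 2.3, 2.4, 2.6, 3.4, 3.4, 3.5, 3.3, 2.6};
p = 2;
T = nrow(y);
X = j(T-p, 1, 1);                     /* intercept column             */
do i = 1 to p;
   X = X || y[p+1-i : T-i];           /* column of y(t-i)             */
end;
Z = X || y[p+1:T];
U = root(Z`*Z);                       /* upper triangular factor of Z */
R  = U[1:p+1, 1:p+1];
w1 = U[1:p+1, p+2];
w2 = U[p+2, p+2];
a = trisolv(1, R, w1);                /* solve R*a = w1               */
sigma2 = w2##2 / (T-p);               /* residual variance estimate   */
aic = (T-p)*log(sigma2) + 2*(p+1);
print a sigma2 aic;
quit;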

Bayesian Constrained Least Squares

Consider the additive time series model

   y_t = T_t + S_t + \epsilon_t,    \epsilon_t ~ N(0, \sigma^2)

Practically, it is not possible to estimate the parameters a = (T_1, ..., T_T, S_1, ..., S_T)', since the number of parameters exceeds the number of available observations. Let \nabla_L^m denote the seasonal difference operator with L seasons and degree m; that is, \nabla_L^m = (1 - B^L)^m. Suppose that T = L * n. Some constraints on the trend and seasonal components need to be imposed such that the sums of squares of \nabla^k T_t, \nabla_L^m S_t, and \sum_{i=0}^{L-1} S_{t-i} are small. The constrained least squares estimates are obtained by minimizing

   \sum_{t=1}^{T} \left\{ (y_t - T_t - S_t)^2 + d^2 \left[ s^2 (\nabla^k T_t)^2 + (\nabla_L^m S_t)^2 + z^2 (S_t + ... + S_{t-L+1})^2 \right] \right\}

Using matrix notation,

   (y - Ma)'(y - Ma) + (a - a_0)' D'D (a - a_0)

where M = [ I_T : I_T ], y = (y_1, ..., y_T)', and a_0 is the initial guess of a. The matrix D is a 3T x 2T control matrix in which the structure varies according to the order of differencing in trend and season:

   D = d \begin{bmatrix} E_m & 0 \\ zF & 0 \\ 0 & sG_k \end{bmatrix}
where

   E_m = C_m \otimes I_L,    m = 1, 2, 3

   F = \begin{bmatrix} 1 & 0 & ... & 0 \\ 1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 1 & ... & 1 & 1 \end{bmatrix}_{T \times T}

   G_1 = \begin{bmatrix} 1 & 0 & 0 & ... & 0 \\ -1 & 1 & 0 & ... & 0 \\ 0 & -1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & ... & 0 & -1 & 1 \end{bmatrix}_{T \times T}

   G_2 = \begin{bmatrix} 1 & 0 & 0 & 0 & ... & 0 \\ -2 & 1 & 0 & 0 & ... & 0 \\ 1 & -2 & 1 & 0 & ... & 0 \\ 0 & 1 & -2 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & ... & 0 & 1 & -2 & 1 \end{bmatrix}_{T \times T}

   G_3 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & ... & 0 \\ -3 & 1 & 0 & 0 & 0 & ... & 0 \\ 3 & -3 & 1 & 0 & 0 & ... & 0 \\ -1 & 3 & -3 & 1 & 0 & ... & 0 \\ 0 & -1 & 3 & -3 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & ... & 0 & -1 & 3 & -3 & 1 \end{bmatrix}_{T \times T}

The n x n matrix C_m has the same structure as the matrix G_m, and I_L is the L x L identity matrix. The solution of the constrained least squares method is equivalent to that of maximizing the function

   L(a) = \exp\left[ -\frac{1}{2\sigma^2} (y - Ma)'(y - Ma) \right] \exp\left[ -\frac{1}{2\sigma^2} (a - a_0)' D'D (a - a_0) \right]

Therefore, the PDF of the data y is

   f(y | \sigma^2, a) = \left( \frac{1}{2\pi} \right)^{T/2} \left( \frac{1}{\sigma} \right)^{T} \exp\left[ -\frac{1}{2\sigma^2} (y - Ma)'(y - Ma) \right]

The prior PDF of the parameter vector a is

   \pi(a | D, \sigma^2, a_0) = \left( \frac{1}{2\pi} \right)^{T} \left( \frac{1}{\sigma} \right)^{2T} |D'D|^{1/2} \exp\left[ -\frac{1}{2\sigma^2} (a - a_0)' D'D (a - a_0) \right]

When the constant d is known, the estimate \hat{a} of a is the mean of the posterior distribution, where the posterior PDF of the parameter a is proportional to the function L(a). It is obvious that \hat{a} is the minimizer of \| g(a | d) \|^2 = (\tilde{y} - \tilde{D} a)'(\tilde{y} - \tilde{D} a), where

   \tilde{y} = \begin{bmatrix} y \\ D a_0 \end{bmatrix}

   \tilde{D} = \begin{bmatrix} M \\ D \end{bmatrix}

The value of d is determined by the minimum ABIC procedure. The ABIC is defined as

   ABIC = T \log\left[ \frac{1}{T} \| g(a | d) \|^2 \right] + 2 \{ \log[\det(D'D + M'M)] - \log[\det(D'D)] \}

State Space and Kalman Filter Method

In this section, the mathematical formulas for state space modeling are introduced. The Kalman filter algorithms are derived from the state space model. As an example, the state space model of the TSDECOMP subroutine is formulated.

Define the following state space model:

   x_t = F x_{t-1} + G w_t
   y_t = H_t x_t + \epsilon_t

where \epsilon_t ~ N(0, \sigma^2) and w_t ~ N(0, Q). If the observations (y_1, ..., y_T) and the initial conditions x_{0|0} and P_{0|0} are available, the one-step predictor x_{t|t-1} of the state vector x_t and its mean square error (MSE) matrix P_{t|t-1} are written as

   x_{t|t-1} = F x_{t-1|t-1}
   P_{t|t-1} = F P_{t-1|t-1} F' + G Q G'

Using the current observation, the filtered value of x_t and its variance P_{t|t} are updated:

   x_{t|t} = x_{t|t-1} + K_t e_t
   P_{t|t} = (I - K_t H_t) P_{t|t-1}

where e_t = y_t - H_t x_{t|t-1} and K_t = P_{t|t-1} H_t' (H_t P_{t|t-1} H_t' + \sigma^2 I)^{-1}. The log-likelihood function is computed as

   \ell = -\frac{1}{2} \sum_{t=1}^{T} \log(2\pi v_{t|t-1}) - \sum_{t=1}^{T} \frac{e_t^2}{2 v_{t|t-1}}

where v_{t|t-1} is the conditional variance of the one-step prediction error e_t.
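
The following statements sketch these recursions and the log likelihood for the simplest scalar case (F = 1, G = 1, H = 1); the series y, Q, and \sigma^2 are hypothetical:

proc iml;
y = {1.1, 0.9, 1.4, 1.7, 1.5, 2.0};
Q = 0.1;  sigma2 = 0.5;               /* hypothetical variances        */
x = 0;  P = 10;                       /* initial conditions x0|0, P0|0 */
pi = arcos(-1);
loglik = 0;
do t = 1 to nrow(y);
   xp = x;  Pp = P + Q;               /* prediction step (F = G = 1)   */
   e = y[t] - xp;                     /* one-step prediction error     */
   v = Pp + sigma2;                   /* its variance v(t|t-1)         */
   K = Pp / v;                        /* Kalman gain                   */
   x = xp + K*e;                      /* filtered state x(t|t)         */
   P = (1 - K)*Pp;                    /* filtered variance P(t|t)      */
   loglik = loglik - 0.5*log(2*pi*v) - e##2/(2*v);
end;
print loglik;
quit;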

Consider the additive time series decomposition

   y_t = T_t + S_t + TD_t + u_t + x_t' \beta_t + \epsilon_t

where x_t is a (K x 1) regressor vector and \beta_t is a (K x 1) time-varying coefficient vector. Each component has the following constraints:

   \nabla^k T_t = w_{1t},    w_{1t} ~ N(0, \tau_1^2)
   \nabla_L^m S_t = w_{2t},    w_{2t} ~ N(0, \tau_2^2)
   u_t = \sum_{i=1}^{p} \alpha_i u_{t-i} + w_{3t},    w_{3t} ~ N(0, \tau_3^2)
   \beta_{jt} = \beta_{j,t-1} + w_{3+j,t},    w_{3+j,t} ~ N(0, \tau_{3+j}^2),    j = 1, ..., K
   \sum_{i=1}^{7} \gamma_{it} TD_t(i) = \sum_{i=1}^{6} \gamma_{it} (TD_t(i) - TD_t(7))
   \gamma_{it} = \gamma_{i,t-1}

where \nabla^k = (1 - B)^k and \nabla_L^m = (1 - B^L)^m. The AR component u_t is assumed to be stationary. The trading-day component TD_t(i) represents the number of the ith day of the week in time t. If k = 3, p = 3, m = 1, and L = 12 (monthly data),

   T_t = 3T_{t-1} - 3T_{t-2} + T_{t-3} + w_{1t}

   \sum_{i=0}^{11} S_{t-i} = w_{2t}

   u_t = \sum_{i=1}^{3} \alpha_i u_{t-i} + w_{3t}

The state vector is defined as

   x_t = (T_t, T_{t-1}, T_{t-2}, S_t, ..., S_{t-10}, u_t, u_{t-1}, u_{t-2}, \gamma_{1t}, ..., \gamma_{6t})'

The matrix F is

   F = \begin{bmatrix} F_1 & 0 & 0 & 0 \\ 0 & F_2 & 0 & 0 \\ 0 & 0 & F_3 & 0 \\ 0 & 0 & 0 & F_4 \end{bmatrix}

where

   F_1 = \begin{bmatrix} 3 & -3 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}

   F_2 = \begin{bmatrix} -1_{10}' & -1 \\ I_{10} & 0 \end{bmatrix}

   F_3 = \begin{bmatrix} \alpha_1 & \alpha_2 & \alpha_3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}

   F_4 = I_6

   1_{10}' = (1, 1, ..., 1)

The matrix G can be denoted as

   G = \begin{bmatrix} g_1 & 0 & 0 \\ 0 & g_2 & 0 \\ 0 & 0 & g_3 \\ 0 & 0 & 0 \end{bmatrix}

where

   g_1 = g_3 = (1, 0, 0)'
   g_2 = (1, 0, ..., 0)'

Finally, the matrix H_t is time-varying:

   H_t = (1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, h_t')

where

   h_t = (D_t(1), D_t(2), D_t(3), D_t(4), D_t(5), D_t(6))'
   D_t(i) = TD_t(i) - TD_t(7),    i = 1, ..., 6

Missing Values

The TIMSAC subroutines skip any missing values at the beginning of the data set. When the univariate and multivariate AR models are estimated via least squares (TSMLOCAR, TSMLOMAR, TSUNIMAR, TSMULMAR, and TSPEARS), there are three options available: MISSING=0, MISSING=1, or MISSING=2. When the MISSING=0 (default) option is specified, the first contiguous observations with no missing values are used. The MISSING=1 option specifies that only nonmissing observations should be used, ignoring the observations with missing values. If the MISSING=2 option is specified, the missing values are filled with the sample mean. The least squares estimator with the MISSING=2 option is biased in general.

The BAYSEA subroutine assumes the same prior distribution of the trend and seasonal components that correspond to the missing observations. A modification is made to skip the components of the vector g(a|d) that correspond to the missing observations. The vector g(a|d) is defined in the section "Bayesian Constrained Least Squares" on page 296. In addition, the TSBAYSEA subroutine considers outliers as missing values. The TSDECOMP and TSTVCAR subroutines skip the Kalman filter updating equation when the current observation is missing.
ISM TIMSAC Packages

A description of each TIMSAC package follows. Each description includes a list of the programs provided in the TIMSAC version.

TIMSAC-72
The TIMSAC-72 package analyzes and controls feedback systems (for example, a cement kiln process). Univariate and multivariate AR models are employed in this original TIMSAC package. The final prediction error (FPE) criterion is used for model selection.

 • AUSPEC estimates the power spectrum by the Blackman-Tukey procedure.
 • AUTCOR computes autocovariance and autocorrelation.
 • DECONV computes the impulse response function.
 • FFTCOR computes autocorrelation and crosscorrelation via the fast Fourier transform.
 • FPEAUT computes AR coefficients and FPE for the univariate AR model.
 • FPEC computes AR coefficients and FPE for the control system or multivariate AR model.
 • MULCOR computes multiple covariance and correlation.
 • MULNOS computes relative power contribution.
 • MULRSP estimates the rational spectrum for multivariate data.
 • MULSPE estimates the cross spectrum by the Blackman-Tukey procedure.
 • OPTDES performs optimal controller design.
 • OPTSIM performs optimal controller simulation.
 • RASPEC estimates the rational spectrum for univariate data.
 • SGLFRE computes the frequency response function.
 • WNOISE performs white noise simulation.

TIMSAC-74
The TIMSAC-74 package estimates and forecasts univariate and multivariate ARMA models by fitting the canonical Markovian model. A locally stationary autoregressive model is also analyzed. Akaike's information criterion (AIC) is used for model selection.

 • AUTARM performs automatic univariate ARMA model fitting.
 • BISPEC computes bispectrum.
 • CANARM performs univariate canonical correlation analysis.
 • CANOCA performs multivariate canonical correlation analysis.
 • COVGEN computes the covariance from the gain function.
 • FRDPLY plots the frequency response function.
 • MARKOV performs automatic multivariate ARMA model fitting.
 • NONST estimates the locally stationary AR model.
 • PRDCTR performs ARMA model prediction.
 • PWDPLY plots the power spectrum.
 • SIMCON performs optimal controller design and simulation.
 • THIRMO computes the third-order moment.

TIMSAC-78
The TIMSAC-78 package uses the Householder transformation to estimate time series models. This package also contains Bayesian modeling and the exact maximum likelihood estimation of the ARMA model. Minimum AIC or Akaike Bayesian information criterion (ABIC) modeling is extensively used.

 • BLOCAR estimates the locally stationary univariate AR model by using the Bayesian method.
 • BLOMAR estimates the locally stationary multivariate AR model by using the Bayesian method.
 • BSUBST estimates the univariate subset regression model by using the Bayesian method.
 • EXSAR estimates the univariate AR model by using the exact maximum likelihood method.
 • MLOCAR estimates the locally stationary univariate AR model by using the minimum AIC method.
 • MLOMAR estimates the locally stationary multivariate AR model by using the minimum AIC method.
 • MULBAR estimates the multivariate AR model by using the Bayesian method.
 • MULMAR estimates the multivariate AR model by using the minimum AIC method.
 • NADCON performs noise adaptive control.
 • PERARS estimates the periodic AR model by using the minimum AIC method.
 • UNIBAR estimates the univariate AR model by using the Bayesian method.
 • UNIMAR estimates the univariate AR model by using the minimum AIC method.
 • XSARMA estimates the univariate ARMA model by using the exact maximum likelihood method.

In addition, the following test subroutines are available: TSSBST, TSWIND, TSROOT, TSTIMS, and TSCANC.

TIMSAC-84
The TIMSAC-84 package contains the Bayesian time series modeling procedure, the point process data analysis, and the seasonal adjustment procedure.

 • ADAR estimates the amplitude-dependent AR model.
 • BAYSEA performs Bayesian seasonal adjustments.
 • BAYTAP performs Bayesian tidal analysis.
 • DECOMP performs time series decomposition analysis by using state space modeling.
 • EPTREN estimates intensity rates of either the exponential polynomial or exponential Fourier series of the nonstationary Poisson process model.
 • LINLIN estimates linear intensity models of the self-exciting point process with another process input and with cyclic and trend components.
 • LINSIM performs simulation of the point process estimated by the subroutine LINLIN.
 • LOCCAR estimates the locally constant AR model.
 • MULCON performs simulation, control, and prediction of the multivariate AR model.
 • NONSPA performs nonstationary spectrum analysis by using the minimum Bayesian AIC procedure.
 • PGRAPH performs graphical analysis for point process data.
 • PTSPEC computes periodograms of point process data with significant bands.
 • SIMBVH performs simulation of the bivariate Hawkes mutually exciting point process.
 • SNDE estimates the stochastic nonlinear differential equation model.
 • TVCAR estimates the time-varying AR coefficient model by using state space modeling.


Refer to Kitagawa and Akaike (1981) and Ishiguro (1987) for more information about TIMSAC pro-
grams.

Example 13.1: VAR Estimation and Variance Decomposition

In this example, a VAR model is estimated and forecast. The VAR(3) model is estimated by using investment, durable consumption, and consumption expenditures. The data are found in the appendix to Lütkepohl (1993). The stationary VAR(3) process is specified as

   y_t = A_0 + A_1 y_{t-1} + A_2 y_{t-2} + A_3 y_{t-3} + \epsilon_t

The matrix ARCOEF contains the AR coefficients (A_1, A_2, and A_3), and the matrix EV contains error covariance estimates. An intercept vector A_0 is included in the first row of the matrix ARCOEF if OPT[1]=1 is specified. Here is the code:

data one;
input invest income consum @@;
datalines;
180 451 415 179 465 421 185 485 434 192 493 448
211 509 459 202 520 458 207 521 479 214 540 487
231 548 497 229 558 510 234 574 516 237 583 525
206 591 529 250 599 538 259 610 546 263 627 555
264 642 574 280 653 574 282 660 586 292 694 602
286 709 617 302 734 639 304 751 653 307 763 668
317 766 679 314 779 686 306 808 697 304 785 688
292 794 704 275 799 699 273 799 709 301 812 715
280 837 724 289 853 746 303 876 758 322 897 779
315 922 798 339 949 816 364 979 837 371 988 858
375 1025 881 432 1063 905 453 1104 934 460 1131 968
475 1137 983 496 1178 1013 494 1211 1034 498 1256 1064
526 1290 1101 519 1314 1102 516 1346 1145 531 1385 1173
573 1416 1216 551 1436 1229 538 1462 1242 532 1493 1267
558 1516 1295 524 1557 1317 525 1613 1355 519 1642 1371
526 1690 1402 510 1759 1452 519 1756 1485 538 1780 1516
549 1807 1549 570 1831 1567 559 1873 1588 584 1897 1631
611 1910 1650 597 1943 1685 603 1976 1722 619 2018 1752
635 2040 1774 658 2070 1807 675 2121 1831 700 2132 1842
692 2199 1890 759 2253 1958 782 2276 1948 816 2318 1994
844 2369 2061 830 2423 2056 853 2457 2102 852 2470 2121
833 2521 2145 860 2545 2164 870 2580 2206 830 2620 2225
801 2639 2235 824 2618 2237 831 2628 2250 830 2651 2271
;

proc iml;
use one;
read all into y var{invest income consum};
mdel = 1;
maice = 0;
misw = 0;                      /*-- instantaneous modeling ? --*/
call tsmulmar(arcoef_l,ev,nar,aic) data=y maxlag=3
     opt=(mdel || maice || misw) print=1;
/*-- ARCOEF_L includes the intercept vector A0 in its first row; --*/
/*-- strip it off to obtain the AR coefficient matrix ARCOEF.    --*/
arcoef = j(9,3,0);
do i = 1 to 9;
   do j = 1 to 3;
      arcoef[i,j] = arcoef_l[i+1,j];
   end;
end;
print arcoef;
misw = 1;
call tsmulmar(arcoef,ev,nar,aic) data=y maxlag=3
     opt=(mdel || maice || misw) print=1;
print ev;

To obtain the unit triangular matrix L^{-1} and diagonal matrix D_t, you need to estimate the
instantaneous response model. When you specify the OPT[3]=1 option, the first row of the output
matrix EV contains error variances of the instantaneous response model, while the unit triangular
matrix is in the second through the fourth rows. See Output 13.1.1. Here is the code:

Output 13.1.1 Error Variance and Unit Triangular Matrix

VAR Estimation and Variance Decomposition

ev

295.21042 190.94664 59.361516


1 0 0
-0.02239 1 0
-0.256341 -0.500803 1

In Output 13.1.2 and Output 13.1.3, you can see the relationship between the instantaneous response
model and the VAR model. The VAR coefficients are computed as A_i = L^{-1} A_i^* (i = 0, 1, 2, 3),
where A_i^* is a coefficient matrix of the instantaneous model. For example, you can verify this
result by using the first lag coefficient matrix (A_1):

   \begin{bmatrix}
      0.886 & 0.340 & -0.014 \\
      0.168 & 1.050 &  0.107 \\
      0.089 & 0.459 &  0.447
   \end{bmatrix}
   =
   \begin{bmatrix}
       1.000 &  0     & 0     \\
      -0.022 &  1.000 & 0     \\
      -0.256 & -0.501 & 1.000
   \end{bmatrix}^{-1}
   \begin{bmatrix}
       0.886 &  0.340 & -0.014 \\
       0.149 &  1.043 &  0.107 \\
      -0.222 & -0.154 &  0.397
   \end{bmatrix}

/*-- continued in the same PROC IML session --*/
mdel = 1;
maice = 0;
misw = 0;
call tsmulmar(arcoef,ev,nar,aic) data=y maxlag=3
     opt=(mdel || maice || misw);
call tspred(forecast,impulse0,mse,y,arcoef,nar,0,ev)
     npred=10 start=nrow(y) constant=mdel;

Output 13.1.2 VAR Estimates

arcoef

0.8855926 0.3401741 -0.014398


0.1684523 1.0502619 0.107064
0.0891034 0.4591573 0.4473672
-0.059195 -0.298777 0.1629818
0.1128625 -0.044039 -0.088186
0.1684932 -0.025847 -0.025671
0.0637227 -0.196504 0.0695746
-0.226559 0.0532467 -0.099808
-0.303697 -0.139022 0.2576405

Output 13.1.3 Instantaneous Response Model Estimates

arcoef

0.885593 0.340174 -0.014398


0.148624 1.042645 0.107386
-0.222272 -0.154018 0.39744
-0.059195 -0.298777 0.162982
0.114188 -0.037349 -0.091835
0.127145 0.072796 -0.023287
0.063723 -0.196504 0.069575
-0.227986 0.057646 -0.101366
-0.20657 -0.115316 0.28979

When the VAR estimates are available, you can forecast the future values by using the TSPRED call.
By default, the one-step predictions are produced until the START= point is reached. The NPRED=h
option specifies how far you want to predict. The prediction error covariance matrix MSE contains
h mean square error matrices. The output matrix IMPULSE contains the estimate of the coefficients
\Psi_i of the infinite MA process. The following IML code estimates the VAR(3) model and performs
10-step-ahead prediction.

mdel = 1;
maice = 0;
misw = 0;
call tsmulmar(arcoef,ev,nar,aic) data=y maxlag=3
     opt=(mdel || maice || misw);
call tspred(forecast,impulse,mse,y,arcoef,nar,0,ev)
     npred=10 start=nrow(y) constant=mdel;
print impulse;
print impulse;

The lagged effects of a unit increase in the error disturbances are included in the matrix IMPULSE.
For example:

   \frac{\partial y_{t+2}}{\partial \epsilon_t'} =
   \begin{bmatrix}
      0.781100 & 0.353140 & 0.180211 \\
      0.448501 & 1.165474 & 0.069731 \\
      0.364611 & 0.692111 & 0.222342
   \end{bmatrix}

Output 13.1.4 displays the first 15 rows of the matrix IMPULSE.

Output 13.1.4 Moving-Average Coefficients: MA(0)–MA(4)

impulse

1 0 0
0 1 0
0 0 1
0.8855926 0.3401741 -0.014398
0.1684523 1.0502619 0.107064
0.0891034 0.4591573 0.4473672
0.7810999 0.3531397 0.1802109
0.4485013 1.1654737 0.0697311
0.3646106 0.6921108 0.2223425
0.8145483 0.243637 0.2914643
0.4997732 1.3625363 -0.018202
0.2775237 0.7555914 0.3885065
0.7960884 0.2593068 0.260239
0.5275069 1.4134792 0.0335483
0.267452 0.8659426 0.3190203

In addition, you can compute the lagged response of a one-unit increase in the orthogonalized
disturbances \eta_t:

   \frac{\partial y_{t+m}}{\partial \eta_{jt}} =
   \frac{\partial E(y_{t+m} \mid y_{jt}, y_{j-1,t}, \ldots, X_t)}{\partial y_{jt}} = \Psi_m L_j

When the error matrix EV is obtained from the instantaneous response model, you need to convert
the matrix IMPULSE. The first 15 rows of the matrix ORTH_IMP are shown in Output 13.1.5. Note that
the matrix constructed from the last three rows of EV becomes the matrix L^{-1}. Here is the code:

call tsmulmar(arcoef,ev,nar,aic) data=y maxlag=3
     opt={1 0 1};
lmtx = inv(ev[2:nrow(ev),]);
orth_imp = impulse * lmtx;
print orth_imp;

Output 13.1.5 Transformed Moving-Average Coefficients

orth_imp

1 0 0
0.0223902 1 0
0.267554 0.5008031 1
0.889357 0.3329638 -0.014398
0.2206132 1.1038799 0.107064
0.219079 0.6832001 0.4473672
0.8372229 0.4433899 0.1802109
0.4932533 1.2003953 0.0697311
0.4395957 0.8034606 0.2223425
0.8979858 0.3896033 0.2914643
0.5254106 1.3534206 -0.018202
0.398388 0.9501566 0.3885065
0.8715223 0.3896353 0.260239
0.5681309 1.4302804 0.0335483
0.3721958 1.025709 0.3190203

You can verify the result for the case of

   \frac{\partial y_{t+2}}{\partial \eta_{2t}} =
   \frac{\partial E(y_{t+2} \mid y_{2t}, y_{1t}, \ldots, X_t)}{\partial y_{2t}} = \Psi_2 L_2

using the simple computation

   \begin{bmatrix}
      0.443390 \\ 1.200395 \\ 0.803461
   \end{bmatrix}
   =
   \begin{bmatrix}
      0.781100 & 0.353140 & 0.180211 \\
      0.448501 & 1.165474 & 0.069731 \\
      0.364611 & 0.692111 & 0.222342
   \end{bmatrix}
   \begin{bmatrix}
      0.000000 \\ 1.000000 \\ 0.500803
   \end{bmatrix}

The contribution of the ith orthogonalized innovation to the mean square error matrix of the
10-step forecast is computed by using the formula

   d_{ii} \left[ L_i L_i' + \Psi_1 L_i L_i' \Psi_1' + \cdots + \Psi_9 L_i L_i' \Psi_9' \right]

In Output 13.1.6, diagonal elements of each decomposed MSE matrix are displayed as the matrix
CONTRIB, as well as those of the MSE matrix (VAR). Here is the code:

mse1 = j(3,3,0);
mse2 = j(3,3,0);
mse3 = j(3,3,0);
do i = 1 to 5;
   psi = impulse[(i-1)*3+1:3*i,];
   mse1 = mse1 + psi*lmtx[,1]*lmtx[,1]`*psi`;
   mse2 = mse2 + psi*lmtx[,2]*lmtx[,2]`*psi`;
   mse3 = mse3 + psi*lmtx[,3]*lmtx[,3]`*psi`;
end;
mse1 = ev[1,1]#mse1;
mse2 = ev[1,2]#mse2;
mse3 = ev[1,3]#mse3;
contrib = vecdiag(mse1) || vecdiag(mse2) || vecdiag(mse3);
var = vecdiag(mse[28:30,]);
print contrib var;

Output 13.1.6 Orthogonal Innovation Contribution

contrib var

1197.9131 116.68096 11.003194 2163.7104


263.12088 1439.1551 1.0555626 4573.9809
180.09836 633.55931 89.177905 2466.506

As shown in Output 13.1.6, the investment innovation contribution to its own variable is 1197.9131,
and the income innovation contribution to the consumption expenditure is 633.55931. It is easy to
understand the contribution of innovations in the ith variable to MSE when you compute the
innovation account. In Output 13.1.7, innovations in the first variable (investment) explain 55.36%
of its own error variance and 5.75% of the error variance of the second variable (income), while
the innovations in the second variable explain 31.46% of its own error variance. It is
straightforward to construct the general multistep forecast error variance decomposition. Here is
the code:

account = contrib * 100 / (var@j(1,3,1));   /* @ replicates var across the columns */
print account;

Output 13.1.7 Innovation Account

account

55.363835 5.3926331 0.5085336


5.7525574 31.463951 0.0230775
7.3017604 25.68651 3.615556

Kalman Filter Subroutines

This section describes a collection of Kalman filtering and smoothing subroutines for time series
analysis; immediately following are three examples that use the Kalman filtering subroutines. The
state space model is a method for analyzing a wide range of time series models. When the time
series is represented by the state space model (SSM), the Kalman filter is used for filtering,
prediction, and smoothing of the state vector. The state space model is composed of the measurement
and transition equations.

The following Kalman filtering and smoothing subroutines are supported:

KALCVF performs covariance filtering and prediction.
KALCVS performs fixed-interval smoothing.
KALDFF performs diffuse covariance filtering and prediction.
KALDFS performs diffuse fixed-interval smoothing.

Getting Started

The measurement (or observation) equation can be written

   y_t = b_t + H_t z_t + \epsilon_t

where b_t is an N_y \times 1 vector, H_t is an N_y \times N_z matrix, the sequence of observation
noise \epsilon_t is independent, z_t is an N_z \times 1 state vector, and y_t is an N_y \times 1
observed vector.

The transition (or state) equation is denoted as a first-order Markov process of the state vector.

   z_{t+1} = a_t + F_t z_t + \eta_t

where a_t is an N_z \times 1 vector, F_t is an N_z \times N_z transition matrix, and the sequence
of transition noise \eta_t is independent. This equation is often called a shifted transition
equation because the state vector is shifted forward one time period. The transition equation can
also be denoted by using an alternative specification

   z_t = a_t + F_t z_{t-1} + \eta_t

There is no real difference between the shifted transition equation and this alternative equation
if the observation noise and transition equation noise are uncorrelated; that is,
E(\epsilon_t \eta_t') = 0. It is assumed that

   E(\eta_t \eta_s') = V_t \delta_{ts}
   E(\epsilon_t \epsilon_s') = R_t \delta_{ts}
   E(\eta_t \epsilon_s') = G_t \delta_{ts}

where

   \delta_{ts} = 1 if t = s, and \delta_{ts} = 0 if t \neq s

De Jong (1991) proposed a diffuse Kalman filter that can handle an arbitrarily large initial state
covariance matrix. The diffuse initial state assumption is reasonable if you encounter the case of
parameter uncertainty or SSM nonstationarity. The SSM of the diffuse Kalman filter is written

   y_t = X_t \beta + H_t z_t + \epsilon_t
   z_{t+1} = W_t \beta + F_t z_t + \eta_t
   z_0 = a + A\delta
   \beta = b + B\delta

where \delta is a random variable with a mean of \mu and a variance of \sigma^2 \Sigma. When
\sigma^2 \to \infty, the SSM is said to be diffuse.

The KALCVF call computes the one-step prediction z_{t+1|t} and the filtered estimate z_{t|t},
together with their covariance matrices P_{t+1|t} and P_{t|t}, using forward recursions. You can
obtain the k-step prediction z_{t+k|t} and its covariance matrix P_{t+k|t} with the KALCVF call.
The KALCVS call uses backward recursions to compute the smoothed estimate z_{t|T} and its
covariance matrix P_{t|T} when there are T observations in the complete data.

The KALDFF call produces one-step prediction of the state and the unobserved random vector \delta
as well as their covariance matrices. The KALDFS call computes the smoothed estimate z_{t|T} and
its covariance matrix P_{t|T}.
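For example, a filtering call followed by a smoothing call takes the following form. This is a
minimal sketch that assumes the data matrix y and the system matrices a, f, b, h, and var have
already been defined, as in Example 13.3 later in this section:

   /*-- forward filtering: one-step predictions and filtered values --*/
   call kalcvf(pred,vpred,filt,vfilt,y,0,a,f,b,h,var);
   /*-- backward smoothing, using the one-step prediction values    --*/
   call kalcvs(sm,vsm,y,a,f,b,h,var,pred,vpred);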

Syntax

CALL KALCVF (pred, vpred, filt, vfilt, data, lead, a, f, b, h, var < , z0, vz0 >) ;

CALL KALCVS (sm, vsm, data, a, f, b, h, var, pred, vpred < , un, vun >) ;

CALL KALDFF (pred, vpred, initial, s2, data, lead, int, coef, var, intd, coefd < , n0, at, mt, qt >) ;

CALL KALDFS (sm, vsm, data, int, coef, var, bvec, bmat, initial, at, mt, s2 < , un, vun >) ;

Example 13.2: Kalman Filtering: Likelihood Function Evaluation

In the following example, the log-likelihood function of the SSM is computed by using prediction
error decomposition. The annual real GNP series, y_t, can be decomposed as

   y_t = \mu_t + \epsilon_t

where \mu_t is a trend component and \epsilon_t is a white noise error with
\epsilon_t \sim (0, \sigma_\epsilon^2). Refer to Nelson and Plosser (1982) for more details about
these data. The trend component is assumed to be generated from the following stochastic equations:

   \mu_t = \mu_{t-1} + \beta_{t-1} + \eta_{1t}
   \beta_t = \beta_{t-1} + \eta_{2t}

where \eta_{1t} and \eta_{2t} are independent white noise disturbances with
\eta_{1t} \sim (0, \sigma_{\eta_1}^2) and \eta_{2t} \sim (0, \sigma_{\eta_2}^2).

It is straightforward to construct the SSM of the real GNP series.

   y_t = H z_t + \epsilon_t
   z_t = F z_{t-1} + \eta_t
where

   H = (1, 0), \quad
   F = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad
   z_t = (\mu_t, \beta_t)', \quad
   \eta_t = (\eta_{1t}, \eta_{2t})'

   \mathrm{Var}\begin{pmatrix} \eta_t \\ \epsilon_t \end{pmatrix} =
   \begin{bmatrix}
      \sigma_{\eta_1}^2 & 0 & 0 \\
      0 & \sigma_{\eta_2}^2 & 0 \\
      0 & 0 & \sigma_\epsilon^2
   \end{bmatrix}

When the observation noise \epsilon_t is normally distributed, the average log-likelihood function
of the SSM is

   \ell = \frac{1}{T} \sum_{t=1}^{T} \ell_t

   \ell_t = -\frac{N_y}{2}\log(2\pi) - \frac{1}{2}\log(|C_t|)
            - \frac{1}{2}\hat{\epsilon}_t' C_t^{-1} \hat{\epsilon}_t

where C_t is the mean square error matrix of the prediction error \hat{\epsilon}_t, such that
C_t = H P_{t|t-1} H' + R_t.
The LIK module computes the average log-likelihood function. First, the average log-likelihood
function is computed by using the default initial values: Z0=0 and VZ0=10^6 I. The second call of
module LIK produces the average log-likelihood function with the given initial conditions: Z0=0
and VZ0=10^{-3} I. You can notice a sizable difference between the uncertain initial condition
(VZ0=10^6 I) and the almost deterministic initial condition (VZ0=10^{-3} I) in Output 13.2.1.

Finally, the first 16 observations of one-step predictions, filtered values, and the real GNP
series are produced under the moderate initial condition (VZ0=10I). The data are the annual real
GNP for the years 1909 to 1969. Here is the code:

title 'Likelihood Evaluation of SSM';


title2 'DATA: Annual Real GNP 1909-1969';
data gnp;
input y @@;
datalines;
116.8 120.1 123.2 130.2 131.4 125.6 124.5 134.3
135.2 151.8 146.4 139.0 127.8 147.0 165.9 165.5
179.4 190.0 189.8 190.9 203.6 183.5 169.3 144.2
141.5 154.3 169.5 193.0 203.2 192.9 209.4 227.2
263.7 297.8 337.1 361.3 355.2 312.6 309.9 323.7
324.1 355.3 383.4 395.1 412.8 406.0 438.0 446.1
452.5 447.3 475.9 487.7 497.2 529.8 551.0 581.1
617.8 658.1 675.2 706.6 724.7
;
run;

proc iml;
start lik(y,a,b,f,h,var,z0,vz0);
   nz = nrow(f);
   n = nrow(y);
   k = ncol(y);
   const = k*log(8*atan(1));
   if ( sum(z0 = .) | sum(vz0 = .) ) then
      call kalcvf(pred,vpred,filt,vfilt,y,0,a,f,b,h,var);
   else
      call kalcvf(pred,vpred,filt,vfilt,y,0,a,f,b,h,var,z0,vz0);
   et = y - pred*h`;
   sum1 = 0;
   sum2 = 0;
   do i = 1 to n;
      vpred_i = vpred[(i-1)*nz+1:i*nz,];
      et_i = et[i,];
      ft = h*vpred_i*h` + var[nz+1:nz+k,nz+1:nz+k];
      sum1 = sum1 + log(det(ft));
      sum2 = sum2 + et_i*inv(ft)*et_i`;
   end;
   return(-.5*const-.5*(sum1+sum2)/n);
finish;

use gnp;
read all var {y};
close gnp;

f = {1 1, 0 1};
h = {1 0};
a = j(nrow(f),1,0);
b = j(nrow(h),1,0);
var = diag(j(1,nrow(f)+ncol(y),1e-3));
/*-- initial values are computed --*/
z0 = j(1,nrow(f),.);
vz0 = j(nrow(f),nrow(f),.);
logl = lik(y,a,b,f,h,var,z0,vz0);
print 'No initial values are given', logl;
/*-- initial values are given --*/
z0 = j(1,nrow(f),0);
vz0 = 1e-3#i(nrow(f));
logl = lik(y,a,b,f,h,var,z0,vz0);
print 'Initial values are given', logl;
z0 = j(1,nrow(f),0);
vz0 = 10#i(nrow(f));
call kalcvf(pred0,vpred,filt0,vfilt,y,1,a,f,b,h,var,z0,vz0);

y0 = y;
free y;
y = j(16,1,0);
pred = j(16,2,0);
filt = j(16,2,0);
do i = 1 to 16;
   y[i] = y0[i];
   pred[i,] = pred0[i,];
   filt[i,] = filt0[i,];
end;
print y pred filt;
quit;

Output 13.2.1 Average Log Likelihood of SSM

Likelihood Evaluation of SSM


DATA: Annual Real GNP 1909-1969

No initial values are given

logl

-26313.74

Initial values are given

logl

-91883.49

Output 13.2.2 shows the observed data, the predicted state vectors, and the filtered state vectors for the first
16 observations.
Output 13.2.2 Filtering and One-Step Prediction

y pred filt

116.8 0 0 116.78832 0
120.1 116.78832 0 120.09967 3.3106857
123.2 123.41035 3.3106857 123.22338 3.1938303
130.2 126.41721 3.1938303 129.59203 4.8825531
131.4 134.47459 4.8825531 131.93806 3.5758561
125.6 135.51391 3.5758561 127.36247 -0.610017
124.5 126.75246 -0.610017 124.90123 -1.560708
134.3 123.34052 -1.560708 132.34754 3.0651076
135.2 135.41265 3.0651076 135.23788 2.9753526
151.8 138.21324 2.9753526 149.37947 8.7100967
146.4 158.08957 8.7100967 148.48254 3.7761324
139 152.25867 3.7761324 141.36208 -1.82012
127.8 139.54196 -1.82012 129.89187 -6.776195
147 123.11568 -6.776195 142.74492 3.3049584
165.9 146.04988 3.3049584 162.36363 11.683345
165.5 174.04698 11.683345 167.02267 8.075817

Example 13.3: Kalman Filtering: SSM Estimation With the EM Algorithm

The following example estimates the normal SSM of the mink-muskrat data by using the EM algorithm.
The mink-muskrat series are detrended. Refer to Harvey (1989) for details of this data set. Since
this EM algorithm uses filtering and smoothing, you can learn how to use the KALCVF and KALCVS
calls to analyze the data. Consider the bivariate SSM:

   y_t = H z_t + \epsilon_t
   z_t = F z_{t-1} + \eta_t

where H is a 2 \times 2 identity matrix, the observation noise has a time-invariant covariance
matrix R, and the covariance matrix of the transition equation is also assumed to be time
invariant. The initial state z_0 has mean \mu and covariance \Sigma. For estimation, the \Sigma
matrix is fixed as

   \Sigma = \begin{bmatrix} 0.1 & 0.0 \\ 0.0 & 0.1 \end{bmatrix}

while the mean vector \mu is updated by the smoothing procedure such that \hat{\mu} = z_{0|T}.
Note that this estimation requires an extra smoothing step since the usual smoothing procedure
does not produce z_{0|T}.
The EM algorithm maximizes the expected log-likelihood function given the current parameter
estimates. In practice, the log-likelihood function of the normal SSM is evaluated while the
parameters are updated by using the M-step of the EM maximization

   F^{(i+1)} = S_t(1)\, S_{t-1}(0)^{-1}
   V^{(i+1)} = \frac{1}{T}\left( S_t(0) - S_t(1)\, S_{t-1}(0)^{-1}\, S_t'(1) \right)
   R^{(i+1)} = \frac{1}{T}\sum_{t=1}^{T}\left[ (y_t - H z_{t|T})(y_t - H z_{t|T})' + H P_{t|T} H' \right]
   \mu^{(i+1)} = z_{0|T}

where the index i represents the current iteration number, and

   S_t(0) = \sum_{t=1}^{T} (P_{t|T} + z_{t|T} z_{t|T}'), \quad
   S_t(1) = \sum_{t=1}^{T} (P_{t,t-1|T} + z_{t|T} z_{t-1|T}')

It is necessary to compute the value of P_{t,t-1|T} recursively such that

   P_{t-1,t-2|T} = P_{t-1|t-1} P_{t-2}^{*\prime}
                   + P_{t-1}^{*} (P_{t,t-1|T} - F P_{t-1|t-1}) P_{t-2}^{*\prime}

where P_t^{*} = P_{t|t} F' P_{t+1|t}^{-1} and the initial value P_{T,T-1|T} is derived by using
the formula

   P_{T,T-1|T} = \left[ I - P_{t|t-1} H' (H P_{t|t-1} H' + R)^{-1} H \right] F P_{T-1|T-1}

Note that the initial value of the state vector is updated for each iteration

   z_{1|0} = F^{(i)} \mu^{(i)}
   P_{1|0} = F^{(i)} \Sigma F^{(i)\prime} + V^{(i)}

The objective function value is computed as -2\ell in the IML module LIK. The log-likelihood
function is written

   \ell = -\frac{1}{2}\sum_{t=1}^{T}\log(|C_t|)
          - \frac{1}{2}\sum_{t=1}^{T}(y_t - H z_{t|t-1})\, C_t^{-1}\, (y_t - H z_{t|t-1})'

where C_t = H P_{t|t-1} H' + R.
The iteration history is shown in Output 13.3.1. As shown in Output 13.3.2, the eigenvalues of F
are within the unit circle, which indicates that the SSM is stationary. However, the muskrat
series (Y1) is reported to be difference stationary. The estimated parameters are almost identical
to those of the VAR(1) estimates. Refer to Harvey (1989). Finally, multistep forecasts of y_t are
computed by using the KALCVF call. Here is the code:

call kalcvf(pred,vpred,filt,vfilt,y,15,a,f,b,h,var,z0,vz0);

The predicted values of the state vector z_t and their standard errors are shown in Output 13.3.3.
Here is the code:

title 'SSM Estimation using EM Algorithm';


data one;
input y1 y2 @@;
datalines;
0.10609 0.16794 -0.16852 0.06242 -0.23700 -0.13344
-0.18022 -0.50616 0.18094 -0.37943 0.65983 -0.40132
0.65235 0.08789 0.21594 0.23877 -0.11515 0.40043
-0.00067 0.37758 -0.00387 0.55735 -0.25202 0.34444
-0.65011 -0.02749 -0.53646 -0.41519 -0.08462 0.02591
-0.05640 -0.11348 0.26630 0.20544 0.03641 0.16331
-0.26030 -0.01498 -0.03995 0.09657 0.33612 0.31096
-0.11672 0.30681 -0.69775 -0.69351 -0.07569 -0.56212
0.36149 -0.36799 0.42341 -0.24725 0.26721 0.04478
-0.00363 0.21637 0.08333 0.30188 -0.22480 0.29493
-0.13728 0.35463 -0.12698 0.05490 -0.18770 -0.52573
0.34741 -0.49541 0.54947 -0.26250 0.57423 -0.21936
0.57493 -0.12012 0.28188 0.63556 -0.58438 0.27067
-0.50236 0.10386 -0.60766 0.36748 -1.04784 -0.33493
-0.68857 -0.46525 -0.11450 -0.63648 0.22005 -0.26335
0.36533 0.07017 -0.00151 -0.04977 0.03740 -0.02411
0.22438 0.30790 -0.16196 0.41050 -0.12862 0.34929
0.08448 -0.14995 0.17945 -0.03320 0.37502 0.02953
0.95727 0.24090 0.86188 0.41096 0.39464 0.24157
0.53794 0.29385 0.13054 0.39336 -0.39138 -0.00323
-1.23825 -0.56953 -0.66286 -0.72363
;
run;

proc iml;
start lik(y,pred,vpred,h,rt);
   n = nrow(y);
   nz = ncol(h);
   et = y - pred*h`;
   sum1 = 0;
   sum2 = 0;
   do i = 1 to n;
      vpred_i = vpred[(i-1)*nz+1:i*nz,];
      et_i = et[i,];
      ft = h*vpred_i*h` + rt;
      sum1 = sum1 + log(det(ft));
      sum2 = sum2 + et_i*inv(ft)*et_i`;
   end;
   return(sum1+sum2);
finish;

use one;
read all into y var {y1 y2};
close one;
/*-- mean adjust series --*/
t = nrow(y);
ny = ncol(y);
nz = ny;
f = i(nz);
h = i(ny);

/*-- observation noise variance is diagonal --*/


rt = 1e-5#i(ny);

/*-- transition noise variance --*/


vt = .1#i(nz);
a = j(nz,1,0);
b = j(ny,1,0);
myu = j(nz,1,0);
sigma = .1#i(nz);
converge = 0;
logl0 = 0.0;
do iter = 1 to 100 while( converge = 0 );

   /*-- construct big cov matrix --*/
   var = ( vt || j(nz,ny,0) ) //
         ( j(ny,nz,0) || rt );

   /*-- initial values are changed --*/
   z0 = myu` * f`;
   vz0 = f * sigma * f` + vt;

   /*-- filtering to get one-step prediction and filtered value --*/
   call kalcvf(pred,vpred,filt,vfilt,y,0,a,f,b,h,var,z0,vz0);

   /*-- smoothing using one-step prediction values --*/
   call kalcvs(sm,vsm,y,a,f,b,h,var,pred,vpred);

   /*-- compute likelihood values --*/
   logl = lik(y,pred,vpred,h,rt);

   /*-- store old parameters and function values --*/
   myu0 = myu;
   f0 = f;
   vt0 = vt;
   rt0 = rt;
   diflog = logl - logl0;
   logl0 = logl;
   itermat = itermat // ( iter || logl0 || shape(f0,1) || myu0` );

   /*-- obtain P*(t) to get P_T_0 and Z_T_0 --*/
   /*-- these values are not usually needed --*/
   /*-- See Harvey (1989, p. 154) or Shumway (1988, p. 177) --*/
   jt1 = sigma * f` * inv(vpred[1:nz,]);
   p_t_0 = sigma + jt1*(vsm[1:nz,] - vpred[1:nz,])*jt1`;
   z_t_0 = myu + jt1*(sm[1,]` - pred[1,]`);
   p_t1_t = vpred[(t-1)*nz+1:t*nz,];
   p_t1_t1 = vfilt[(t-2)*nz+1:(t-1)*nz,];
   kt = p_t1_t*h`*inv(h*p_t1_t*h`+rt);

   /*-- obtain P_T_TT1. See Shumway (1988, p. 180) --*/
   p_t_ii1 = (i(nz)-kt*h)*f*p_t1_t1;
   st0 = vsm[(t-1)*nz+1:t*nz,] + sm[t,]`*sm[t,];
   st1 = p_t_ii1 + sm[t,]`*sm[t-1,];
   st00 = p_t_0 + z_t_0 * z_t_0`;
   cov = (y[t,]` - h*sm[t,]`) * (y[t,]` - h*sm[t,]`)` +
         h*vsm[(t-1)*nz+1:t*nz,]*h`;
   do i = t to 2 by -1;
      p_i1_i1 = vfilt[(i-2)*nz+1:(i-1)*nz,];
      p_i1_i = vpred[(i-1)*nz+1:i*nz,];
      jt1 = p_i1_i1 * f` * inv(p_i1_i);
      p_i1_i = vpred[(i-2)*nz+1:(i-1)*nz,];
      if ( i > 2 ) then
         p_i2_i2 = vfilt[(i-3)*nz+1:(i-2)*nz,];
      else
         p_i2_i2 = sigma;
      jt2 = p_i2_i2 * f` * inv(p_i1_i);
      p_t_i1i2 = p_i1_i1*jt2` + jt1*(p_t_ii1 - f*p_i1_i1)*jt2`;
      p_t_ii1 = p_t_i1i2;
      temp = vsm[(i-2)*nz+1:(i-1)*nz,];
      sm1 = sm[i-1,]`;
      st0 = st0 + ( temp + sm1 * sm1` );
      if ( i > 2 ) then
         st1 = st1 + ( p_t_ii1 + sm1 * sm[i-2,]);
      else
         st1 = st1 + ( p_t_ii1 + sm1 * z_t_0`);
      st00 = st00 + ( temp + sm1 * sm1` );
      cov = cov + ( h * temp * h` +
            (y[i-1,]` - h * sm1)*(y[i-1,]` - h * sm1)` );
   end;

   /*-- M-step: update the parameters --*/
   myu = z_t_0;
   f = st1 * inv(st00);
   vt = (st0 - st1 * inv(st00) * st1`)/t;
   rt = cov / t;

   /*-- check convergence --*/
   if ( max(abs((myu - myu0)/(myu0+1e-6))) < 1e-2 &
        max(abs((f - f0)/(f0+1e-6))) < 1e-2 &
        max(abs((vt - vt0)/(vt0+1e-6))) < 1e-2 &
        max(abs((rt - rt0)/(rt0+1e-6))) < 1e-2 &
        abs((diflog)/(logl0+1e-6)) < 1e-3 ) then
      converge = 1;
end;

reset noname;
colnm = {'Iter' '-2*log L' 'F11' 'F12' 'F21' 'F22'
'MYU11' 'MYU22'};
print itermat[colname=colnm format=8.4];

eval = eigval(f0);
colnm = {'Real' 'Imag' 'MOD'};
eval = eval || sqrt((eval#eval)[,+]);
print eval[colname=colnm];
var = ( vt || j(nz,ny,0) ) //
( j(ny,nz,0) || rt );

/*-- initial values are changed --*/


z0 = myu` * f`;
vz0 = f * sigma * f` + vt;
free itermat;

/*-- multistep prediction --*/


call kalcvf(pred,vpred,filt,vfilt,y,15,a,f,b,h,var,z0,vz0);
do i = 1 to 15;
   itermat = itermat // ( i || pred[t+i,] ||
             sqrt(vecdiag(vpred[(t+i-1)*nz+1:(t+i)*nz,]))` );
end;
colnm = {'n-Step' 'Z1_T_n' 'Z2_T_n' 'SE_Z1' 'SE_Z2'};
print itermat[colname=colnm];
quit;

Output 13.3.1 Iteration History

SSM Estimation using EM Algorithm

Iter -2*log L F11 F12 F21 F22 MYU11 MYU22

1.0000 -154.010 1.0000 0.0000 0.0000 1.0000 0.0000 0.0000


2.0000 -237.962 0.7952 -0.6473 0.3263 0.5143 0.0530 0.0840
3.0000 -238.083 0.7967 -0.6514 0.3259 0.5142 0.1372 0.0977
4.0000 -238.126 0.7966 -0.6517 0.3259 0.5139 0.1853 0.1159
5.0000 -238.143 0.7964 -0.6519 0.3257 0.5138 0.2143 0.1304
6.0000 -238.151 0.7963 -0.6520 0.3255 0.5136 0.2324 0.1405
7.0000 -238.153 0.7962 -0.6520 0.3254 0.5135 0.2438 0.1473
8.0000 -238.155 0.7962 -0.6521 0.3253 0.5135 0.2511 0.1518
9.0000 -238.155 0.7962 -0.6521 0.3253 0.5134 0.2558 0.1546
10.0000 -238.155 0.7961 -0.6521 0.3253 0.5134 0.2588 0.1565

Output 13.3.2 Eigenvalues of F Matrix

Real Imag MOD

0.6547534 0.438317 0.7879237


0.6547534 -0.438317 0.7879237

Output 13.3.3 Multistep Prediction

n-Step Z1_T_n Z2_T_n SE_Z1 SE_Z2

1 -0.055792 -0.587049 0.2437666 0.237074


2 0.3384325 -0.319505 0.3140478 0.290662
3 0.4778022 -0.053949 0.3669731 0.3104052
4 0.4155731 0.1276996 0.4021048 0.3218256
5 0.2475671 0.2007098 0.419699 0.3319293
6 0.0661993 0.1835492 0.4268943 0.3396153
7 -0.067001 0.1157541 0.430752 0.3438409
8 -0.128831 0.0376316 0.4341532 0.3456312
9 -0.127107 -0.022581 0.4369411 0.3465325
10 -0.086466 -0.052931 0.4385978 0.3473038
11 -0.034319 -0.055293 0.4393282 0.3479612
12 0.0087379 -0.039546 0.4396666 0.3483717
13 0.0327466 -0.017459 0.439936 0.3485586
14 0.0374564 0.0016876 0.4401753 0.3486415
15 0.0287193 0.0130482 0.440335 0.3487034

Example 13.4: Diffuse Kalman Filtering

The nonstationary SSM is simulated to analyze the diffuse Kalman filter call KALDFF. The
transition equation is generated by using the following formula:

   \begin{bmatrix} z_{1t} \\ z_{2t} \end{bmatrix} =
   \begin{bmatrix} 1.5 & -0.5 \\ 1.0 & 0.0 \end{bmatrix}
   \begin{bmatrix} z_{1,t-1} \\ z_{2,t-1} \end{bmatrix} +
   \begin{bmatrix} \eta_{1t} \\ 0 \end{bmatrix}

where \eta_{1t} \sim N(0, 1). The transition equation is nonstationary since the transition matrix
F has one unit root. Here is the code:

proc iml;
z_1 = 0; z_2 = 0;
do i = 1 to 30;
   z = 1.5*z_1 - .5*z_2 + rannor(1234567);
   z_2 = z_1;
   z_1 = z;
   x = z + .8*rannor(1234578);
   if ( i > 10 ) then y = y // x;
end;

The KALDFF and KALCVF calls produce one-step predictions, and the result shows that the two
predictions coincide after the fifth observation (Output 13.4.1). Here is the code:

t = nrow(y);
h = { 1 0 };
f = { 1.5 -.5, 1 0 };
rt = .64;
vt = diag({1 0});
ny = nrow(h);
nz = ncol(h);
nb = nz;
nd = nz;
a = j(nz,1,0);
b = j(ny,1,0);
int = j(ny+nz,nb,0);
coef = f // h;
var = ( vt || j(nz,ny,0) ) //
( j(ny,nz,0) || rt );
intd = j(nz+nb,1,0);
coefd = i(nz) // j(nb,nd,0);
at = j(t*nz,nd+1,0);
mt = j(t*nz,nz,0);
qt = j(t*(nd+1),nd+1,0);
n0 = -1;
call kaldff(kaldff_p,dvpred,initial,s2,y,0,int,
coef,var,intd,coefd,n0,at,mt,qt);
call kalcvf(kalcvf_p,vpred,filt,vfilt,y,0,a,f,b,h,var);
print kalcvf_p kaldff_p;

Output 13.4.1 Diffuse Kalman Filtering

Diffuse Kalman Filtering

kalcvf_p kaldff_p

0 0 0 0
1.441911 0.961274 1.1214871 0.9612746
-0.882128 -0.267663 -0.882138 -0.267667
-0.723156 -0.527704 -0.723158 -0.527706
1.2964969 0.871659 1.2964968 0.8716585
-0.035692 0.1379633 -0.035692 0.1379633
-2.698135 -1.967344 -2.698135 -1.967344
-5.010039 -4.158022 -5.010039 -4.158022
-9.048134 -7.719107 -9.048134 -7.719107
-8.993153 -8.508513 -8.993153 -8.508513
-11.16619 -10.44119 -11.16619 -10.44119
-10.42932 -10.34166 -10.42932 -10.34166
-8.331091 -8.822777 -8.331091 -8.822777
-9.578258 -9.450848 -9.578258 -9.450848
-6.526855 -7.241927 -6.526855 -7.241927
-5.218651 -5.813854 -5.218651 -5.813854
-5.01855 -5.291777 -5.01855 -5.291777
-6.5699 -6.284522 -6.5699 -6.284522
-4.613301 -4.995434 -4.613301 -4.995434
-5.057926 -5.09007 -5.057926 -5.09007

The likelihood function for the diffuse Kalman filter under the finite initial covariance matrix
is written

   \lambda(y) = -\frac{1}{2}\left[ y^{\#}\log(\hat{\sigma}^2) + \sum_{t=1}^{T}\log(|D_t|) \right]

where y^{\#} is the dimension of the matrix (y_1', \ldots, y_T')'. The likelihood function for the
diffuse Kalman filter under the diffuse initial covariance matrix (\sigma^2 \to \infty) is
computed as \lambda(y) - \frac{1}{2}\log(|S|), where the S matrix is the upper
N_\delta \times N_\delta matrix of Q_t. Output 13.4.2 displays the log likelihood and the diffuse
log likelihood. Here is the code:

d = 0;
do i = 1 to t;
   dt = h*mt[(i-1)*nz+1:i*nz,]*h` + rt;
   d = d + log(det(dt));
end;
s = qt[(t-1)*(nd+1)+1:t*(nd+1)-1,1:nd];
log_l = -(t*log(s2) + d)/2;
dff_logl = log_l - log(det(s))/2;
print log_l dff_logl;

Output 13.4.2 Diffuse Likelihood Function

log_l

Log L -11.42547

dff_logl

Diffuse Log L -9.457596

Vector Time Series Analysis Subroutines

Vector time series analysis involves more than one dependent time series variable, with possible
interrelations or feedback between the dependent variables.

The VARMASIM subroutine generates various time series from underlying VARMA models. Simulating
time series with a known VARMA structure offers a way to learn and develop vector time series
analysis skills.

The VARMACOV subroutine provides the pattern of the autocovariance function of VARMA models and
helps to identify and fit a proper model.

The VARMALIK subroutine provides the log likelihood of a VARMA model and helps to obtain estimates
of the model parameters.
The following subroutines are supported:

VARMACOV computes the theoretical cross covariances for a multivariate ARMA model
VARMALIK evaluates the log-likelihood function for a multivariate ARMA model
VARMASIM generates a multivariate ARMA time series
VNORMAL generates a multivariate normal random series
VTSROOT computes the characteristic roots of a multivariate ARMA model

Getting Started

Stationary VAR Process

Generate the process following the first-order stationary vector autoregressive model with zero
mean

   y_t = \begin{bmatrix} 1.2 & -0.5 \\ 0.6 & 0.3 \end{bmatrix} y_{t-1} + \epsilon_t
   \quad\text{with}\quad
   \Sigma = \begin{bmatrix} 1.0 & 0.5 \\ 0.5 & 1.25 \end{bmatrix}

The following statements compute the roots of characteristic function, compute the five lags of cross-
covariance matrices, generate 100 observations simulated data, and evaluate the log-likelihood function
of the VAR(1) model:

proc iml;
/* Stationary VAR(1) model */
sig = {1.0 0.5, 0.5 1.25};
phi = {1.2 -0.5, 0.6 0.3};
call varmasim(yt,phi) sigma=sig n=100 seed=3243;
call vtsroot(root,phi);
print root;
call varmacov(crosscov,phi) sigma=sig lag=5;
lag = {'0','','1','','2','','3','','4','','5',''};
print lag crosscov;
call varmalik(lnl,yt,phi) sigma=sig;
print lnl;

Output 13.4.3 Plot of Generated VAR(1) Process (VARMASIM)

The stationary VAR(1) processes are shown in Output 13.4.3.



Output 13.4.4 Roots of VAR(1) Model (VTSROOT)

root

0.75 0.3122499 0.8124038 0.3945069 22.603583


0.75 -0.31225 0.8124038 -0.394507 -22.60358

In Output 13.4.4, the first column is the real part (R) of a root of the characteristic function
and the second column is the imaginary part (I). The third column is the modulus, the square root
of R^2 + I^2. The fourth column is tan^{-1}(I/R) in radians, and the last column is the same angle
in degrees. Since the moduli in the third column are less than one, the series is stationary.

Output 13.4.5 Cross-covariance Matrices of VAR(1) Model (VARMACOV)

lag crosscov

0 5.3934173 3.8597124
3.8597124 5.0342051
1 4.5422445 4.3939641
2.1145523 3.826089
2 3.2537114 4.0435359
0.6244183 2.4165581
3 1.8826857 3.1652876
-0.458977 1.0996184
4 0.676579 2.0791977
-1.100582 0.0544993
5 -0.227704 1.0297067
-1.347948 -0.643999

In each matrix in Output 13.4.5, the diagonal elements correspond to the autocovariance functions
of the individual time series, and the off-diagonal elements correspond to the cross-covariance
functions between the two series.
Output 13.4.6 Log-Likelihood function of VAR(1) Model (VARMALIK)

lnl

-113.4708
2.5058678
224.43567

In Output 13.4.6, the first row is the value of the log-likelihood function, the second row is the
sum of the log determinants of the innovation variance, and the last row is the weighted sum of
squares of residuals.

Nonstationary VAR Process

Generate the process following the error correction model with a cointegrated rank of 1:

   (1 - B) y_t = \begin{bmatrix} -0.4 \\ 0.1 \end{bmatrix} (1, -2)\, y_{t-1} + \epsilon_t

with

   \Sigma = \begin{bmatrix} 100 & 0 \\ 0 & 100 \end{bmatrix}
   \quad\text{and}\quad y_0 = 0
The following statements compute the roots of characteristic function and generate simulated data.

proc iml;
/* Nonstationary model */
sig = 100*i(2);
phi = {0.6 0.8, 0.1 0.8};
call varmasim(yt,phi) sigma=sig n=100 seed=1324;
call vtsroot(root,phi);
print root;

Output 13.4.7 Plot of Generated Nonstationary Vector Process (VARMASIM)

The nonstationary processes are shown in Output 13.4.7 and exhibit comovement.

Output 13.4.8 Roots of Nonstationary VAR(1) Model (VTSROOT)

root

1 0 1 0 0
0.4 0 0.4 0 0

In Output 13.4.8, the first column is the real part (R) of a root of the characteristic function
and the second column is the imaginary part (I). The third column is the modulus, the square root
of R^2 + I^2. The fourth column is tan^{-1}(I/R) in radians, and the last column is the same angle
in degrees. Since a modulus in the third column is greater than or equal to one, the series is
nonstationary.

Syntax

CALL VARMACOV (cov, phi, theta, sigma < , p, q, lag >) ;

CALL VARMALIK (lnl, series, phi, theta, sigma < , p, q, opt >) ;

CALL VARMASIM (series, phi, theta, mu, sigma, n < , p, q, initial, seed >) ;

CALL VNORMAL (series, mu, sigma, n < , seed >) ;

CALL VTSROOT (root, phi, theta < , p, q >) ;

Fractionally Integrated Time Series Analysis

This section describes subroutines related to fractionally integrated time series analysis. The
phenomenon of long memory can be observed in hydrology, finance, economics, and so on. Unlike with
a stationary process, the correlations between observations of a long memory series decay to zero
slowly.

The following subroutines are supported:

FARMACOV computes the autocovariance function for a fractionally integrated ARMA model.
FARMAFIT estimates the parameters for a fractionally integrated ARMA model.
FARMALIK computes the log-likelihood function for a fractionally integrated ARMA model.
FARMASIM generates a fractionally integrated ARMA process.
FDIF computes a fractionally differenced process.

Getting Started

Fractional differencing enables the degree of differencing d to take any real value rather than
being restricted to integer values. Fractionally differenced processes are capable of modeling
long-term persistence. The process

   (1 - B)^d y_t = \epsilon_t

is known as a fractional Gaussian noise process or an ARFIMA(0, d, 0) process, where
d \in (-1, 1) \setminus \{0\}, \epsilon_t is a white noise process with mean zero and variance
\sigma_\epsilon^2, and B is the backshift operator such that B^j y_t = y_{t-j}. The extension of
an ARFIMA(0, d, 0) model combines fractional differencing with an ARMA(p, q) model, known as an
ARFIMA(p, d, q) model.

Consider an ARFIMA(0, 0.4, 0) process represented as (1 - B)^{0.4} y_t = \epsilon_t, where
\epsilon_t \sim iid N(0, 2). With the following statements you can

• generate 300 simulated observations

• obtain the fractionally differenced data

• compute the autocovariance function

• compute the log-likelihood function

• fit a fractionally integrated time series model to the data

proc iml;
/* ARFIMA(0,0.4,0) */
lag = (0:12)`;
call farmacov(autocov_D_IS_04, 0.4);
call farmacov(D_IS_005, 0.05);
print lag autocov_D_IS_04 D_IS_005;
d = 0.4;
call farmasim(yt, d) n=300 sigma=2 seed=5345;
call fdif(zt, yt, d);
*print zt;
call farmalik(lnl, yt, d);
print lnl;
call farmafit(d, ar, ma, sigma, yt);
print d sigma;

Output 13.4.9 Plot of Generated ARFIMA(0,0.4,0) Process (FARMASIM)

The FARMASIM call generates the data shown in Output 13.4.9.

Output 13.4.10 Plot of Fractionally Differenced Process (FDIF)

The FDIF call creates the fractionally differenced process; Output 13.4.10 shows a white noise
series.

Output 13.4.11 Autocovariance Functions of ARFIMA(0,0.4,0) and ARFIMA(0,0.05,0) Models (FARMACOV)

lag autocov_D_IS_04 D_IS_005

0 2.0700983 1.0044485
1 1.3800656 0.0528657
2 1.2075574 0.0284662
3 1.1146683 0.0197816
4 1.0527423 0.0152744
5 1.0069709 0.0124972
6 0.9710077 0.0106069
7 0.9415832 0.0092333
8 0.9168047 0.008188
9 0.8954836 0.0073647
10 0.8768277 0.0066985
11 0.8602838 0.006148
12 0.8454513 0.0056849

The first column is the autocovariance function of the ARFIMA(0,0.4,0) model, and the second column is
the autocovariance function of the ARFIMA(0,0.05,0) model. The first column decays to zero more slowly
than the second column.
Output 13.4.12 Log-Likelihood Function of ARFIMA(0,0.4,0) Model (FARMALIK)

lnl

-101.0599
.
.

The first row is the value of the log-likelihood function of the ARFIMA(0,0.4,0) model. Because
the default estimation method is the conditional sum of squares, the last two rows of
Output 13.4.12 contain missing values.

Output 13.4.13 Parameter Estimation of ARFIMA(0,0.4,0) Model (FARMAFIT)

d sigma

0.386507 1.9610507

The final estimates of the parameters are d = 0.387 and \sigma^2 = 1.96, while the true values of
the data generating process are d = 0.4 and \sigma^2 = 2.

Syntax

CALL FARMACOV (cov, d < , phi, theta, sigma, p, q, lag >) ;

CALL FARMAFIT (d, phi, theta, sigma, series < , p, q, opt >) ;

CALL FARMALIK (lnl, series, d < , phi, theta, sigma, p, q, opt >) ;

CALL FARMASIM (series, d < , phi, theta, mu, sigma, n, p, q, initial, seed >) ;

CALL FDIF (out, series, d) ;

References
Afifi, A. A. and Elashoff, R. M. (1967), "Missing Observations in Multivariate Statistics II.
Point Estimation in Simple Linear Regression," Journal of the American Statistical Association,
62, 10–29.

Akaike, H. (1974), "A New Look at Statistical Model Identification," IEEE Transactions on
Automatic Control, 19, 716–723.

Akaike, H. (1977), "On Entropy Maximization Principle," in P. R. Krishnaiah, ed., Applications of
Statistics, 27–41, Amsterdam: North Holland Publishing.

Akaike, H. (1978a), "A Bayesian Analysis of the Minimum AIC Procedure," Annals of the Institute
of Statistical Mathematics, 30, 9–14.

Akaike, H. (1978b), "Time Series Analysis and Control through Parametric Models," in D. F.
Findley, ed., Applied Time Series Analysis, 1–23, New York: Academic Press.

Akaike, H. (1979), "A Bayesian Extension of the Minimum AIC Procedure of Autoregressive Model
Fitting," Biometrika, 66, 237–242.

Akaike, H. (1980a), "Likelihood and the Bayes Procedure," in J. M. Bernardo, M. H. DeGroot, D. V.
Lindley, and M. Smith, eds., Bayesian Statistics, 143–166, Valencia, Spain: University Press.

Akaike, H. (1980b), "Seasonal Adjustment by a Bayesian Modeling," Journal of Time Series Analysis,
1, 1–13.

Akaike, H. (1981), "Likelihood of a Model and Information Criteria," Journal of Econometrics, 16,
3–14.

Akaike, H. (1986), "The Selection of Smoothness Priors for Distributed Lag Estimation," in P. Goel
and A. Zellner, eds., Bayesian Inference and Decision Techniques, 109–118, Elsevier Science.

Akaike, H. and Ishiguro, M. (1980), "Trend Estimation with Missing Observations," Annals of the
Institute of Statistical Mathematics, 32, 481–488.

Akaike, H. and Nakagawa, T. (1988), Statistical Analysis and Control of Dynamic Systems, Tokyo:
KTK Scientific.

Anderson, T. W. (1971), The Statistical Analysis of Time Series, New York: John Wiley & Sons.

Ansley, C. F. (1980), "Computation of the Theoretical Autocovariance Function for a Vector ARMA
Process," Journal of Statistical Computation and Simulation, 12, 15–24.

Ansley, C. F. and Kohn, R. (1986), "A Note on Reparameterizing a Vector Autoregressive Moving
Average Model to Enforce Stationarity," Journal of Statistical Computation and Simulation, 24,
99–106.

Brockwell, P. J. and Davis, R. A. (1991), Time Series: Theory and Methods, Second Edition, New
York: Springer-Verlag.

Chung, C. F. (1996), "A Generalized Fractionally Integrated ARMA Process," Journal of Time Series
Analysis, 2, 111–140.

De Jong, P. (1991), "The Diffuse Kalman Filter," Annals of Statistics, 19, 1073–1083.

Doan, T., Litterman, R., and Sims, C. (1984), "Forecasting and Conditional Projection Using
Realistic Prior Distributions," Econometric Reviews.

Gersch, W. and Kitagawa, G. (1983), "The Prediction of Time Series with Trends and Seasonalities,"
Journal of Business and Economic Statistics, 1, 253–264.

Geweke, J. and Porter-Hudak, S. (1983), "The Estimation and Application of Long Memory Time
Series Models," Journal of Time Series Analysis, 4, 221–238.

Granger, C. W. J. and Joyeux, R. (1980), "An Introduction to Long Memory Time Series Models and
Fractional Differencing," Journal of Time Series Analysis, 1, 15–39.

Harvey, A. C. (1989), Forecasting, Structural Time Series Models and the Kalman Filter,
Cambridge: Cambridge University Press.

Hosking, J. R. M. (1981), "Fractional Differencing," Biometrika, 68, 165–176.

Ishiguro, M. (1984), "Computationally Efficient Implementation of a Bayesian Seasonal Adjustment
Procedure," Journal of Time Series Analysis, 5, 245–253.

Ishiguro, M. (1987), "TIMSAC-84: A New Time Series Analysis and Control Package," Proceedings of
American Statistical Association: Business and Economic Section.

Jones, R. H. and Brelsford, W. M. (1967), "Time Series with Periodic Structure," Biometrika, 54,
403–408.

Kitagawa, G. (1981), "A Nonstationary Time Series Model and Its Fitting by a Recursive Filter,"
Journal of Time Series Analysis, 2, 103–116.

Kitagawa, G. (1983), "Changing Spectrum Estimation," Journal of Sound and Vibration, 89, 433–445.

Kitagawa, G. and Akaike, H. (1978), "A Procedure for the Modeling of Non-stationary Time Series,"
Annals of the Institute of Statistical Mathematics, 30, 351–363.

Kitagawa, G. and Akaike, H. (1981), "On TIMSAC-78," in D. F. Findley, ed., Applied Time Series
Analysis II, 499–547, New York: Academic Press.

Kitagawa, G. and Akaike, H. (1982), "A Quasi Bayesian Approach to Outlier Detection," Annals of
the Institute of Statistical Mathematics, 34, 389–398.

Kitagawa, G. and Gersch, W. (1984), "A Smoothness Priors-State Space Modeling of Time Series with
Trend and Seasonality," Journal of the American Statistical Association, 79, 378–389.

Kitagawa, G. and Gersch, W. (1985a), "A Smoothness Priors Long AR Model Method for Spectral
Estimation," IEEE Transactions on Automatic Control, 30, 57–65.

Kitagawa, G. and Gersch, W. (1985b), "A Smoothness Priors Time-Varying AR Coefficient Modeling of
Nonstationary Covariance Time Series," IEEE Transactions on Automatic Control, 30, 48–56.

Kohn, R. and Ansley, C. F. (1982), "A Note on Obtaining the Theoretical Autocovariances of an
ARMA Process," Journal of Statistical Computation and Simulation, 15, 273–283.

Li, W. K. and McLeod, A. I. (1986), "Fractional Time Series Modeling," Biometrika, 73, 217–221.

Lütkepohl, H. (1993), Introduction to Multiple Time Series Analysis, Berlin: Springer-Verlag.

Mittnik, S. (1990), "Computation of Theoretical Autocovariance Matrices of Multivariate
Autoregressive Moving Average Time Series," Journal of the Royal Statistical Society.

Nelson, C. R. and Plosser, C. I. (1982), "Trends and Random Walks in Macroeconomic Time Series:
Some Evidence and Implications," Journal of Monetary Economics, 10, 139–162.

Pagano, M. (1978), "On Periodic and Multiple Autoregressions," The Annals of Statistics, 6,
1310–1317.

Reinsel, G. C. (1997), Elements of Multivariate Time Series Analysis, Second Edition, New York:
Springer-Verlag.

Sakamoto, Y., Ishiguro, M., and Kitagawa, G. (1986), Akaike Information Criterion Statistics,
Tokyo: KTK Scientific.

Shiller, R. J. (1973), "A Distributed Lag Estimator Derived from Smoothness Priors,"
Econometrica, 41, 775–788.

Shumway, R. H. (1988), Applied Statistical Time Series Analysis, Englewood Cliffs, NJ:
Prentice-Hall.

Sowell, F. (1992), "Maximum Likelihood Estimation of Stationary Univariate Fractionally
Integrated Time Series Models," Journal of Econometrics, 53, 165–188.

Tamura, Y. H. (1986), "An Approach to the Nonstationary Process Analysis," Annals of the
Institute of Statistical Mathematics, 39, 227–241.

Wei, W. W. S. (1990), Time Series Analysis: Univariate and Multivariate Methods, Redwood:
Addison-Wesley.

Whittaker, E. T. (1923), "On a New Method of Graduation," Proceedings of the Edinburgh
Mathematical Society, 41, 63–75.

Whittaker, E. T. and Robinson, G. (1944), Calculus of Observation, Fourth Edition, London:
Blackie & Son Limited.

Zellner, A. (1971), An Introduction to Bayesian Inference in Econometrics, New York: John Wiley &
Sons.
Chapter 14

Nonlinear Optimization Examples

Contents
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Global versus Local Optima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Kuhn-Tucker Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Definition of Return Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Objective Function and Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Finite-Difference Approximations of Derivatives . . . . . . . . . . . . . . . . . . . . 352
Parameter Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Options Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Termination Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
Control Parameters Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Printing the Optimization History . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Nonlinear Optimization Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Example 14.1: Chemical Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Example 14.2: Network Flow and Delay . . . . . . . . . . . . . . . . . . . . . . . . 374
Example 14.3: Compartmental Analysis . . . . . . . . . . . . . . . . . . . . . . . . 378
Example 14.4: MLEs for Two-Parameter Weibull Distribution . . . . . . . . . . . . . 387
Example 14.5: Profile-Likelihood-Based Confidence Intervals . . . . . . . . . . . . . 390
Example 14.6: Survival Curve for Interval Censored Data . . . . . . . . . . . . . . . 392
Example 14.7: A Two-Equation Maximum Likelihood Problem . . . . . . . . . . . . 398
Example 14.8: Time-Optimal Heat Conduction . . . . . . . . . . . . . . . . . . . . . 402
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406

Overview

The IML procedure offers a set of optimization subroutines for minimizing or maximizing a continuous
nonlinear function f D f .x/ of n parameters, where x D .x1 ; : : : ; xn /T . The parameters can be subject
to boundary constraints and linear or nonlinear equality and inequality constraints. The following set of
optimization subroutines is available:

NLPCG    Conjugate Gradient Method
NLPDD    Double Dogleg Method
NLPNMS   Nelder-Mead Simplex Method
NLPNRA   Newton-Raphson Method
NLPNRR   Newton-Raphson Ridge Method
NLPQN    (Dual) Quasi-Newton Method
NLPQUA   Quadratic Optimization Method
NLPTR    Trust-Region Method

The following subroutines are provided for solving nonlinear least squares problems:

NLPLM    Levenberg-Marquardt Least Squares Method
NLPHQN   Hybrid Quasi-Newton Least Squares Methods

A least squares problem is a special form of minimization problem where the objective function is
defined as a sum of squares of other (nonlinear) functions:

   f(x) = \frac{1}{2}\{ f_1^2(x) + \cdots + f_m^2(x) \}

Least squares problems can usually be solved more efficiently by the least squares subroutines
than by the other optimization subroutines.
The following subroutines are provided for the related problems of computing finite-difference
approximations for first- and second-order derivatives and of determining a feasible point subject
to boundary and linear constraints:

NLPFDD   Approximate Derivatives by Finite Differences
NLPFEA   Feasible Point Subject to Constraints

Each optimization subroutine works iteratively. If the parameters are subject only to linear
constraints, all optimization and least squares techniques are feasible-point methods; that is,
they move from feasible point x^{(k)} to a better feasible point x^{(k+1)} by a step in the search
direction s^{(k)}, k = 1, 2, 3, \ldots. If you do not provide a feasible starting point x^{(0)},
the optimization methods call the algorithm used in the NLPFEA subroutine, which tries to compute
a starting point that is feasible with respect to the boundary and linear constraints.

The NLPNMS and NLPQN subroutines permit nonlinear constraints on parameters. For problems with
nonlinear constraints, these subroutines do not use a feasible-point method; instead, the
algorithms begin with whatever starting point you specify, whether feasible or infeasible.

Each optimization technique requires a continuous objective function f = f(x), and all
optimization subroutines except the NLPNMS subroutine require continuous first-order derivatives
of the objective function f. If you do not provide the derivatives of f, they are approximated by
finite-difference formulas. You can use the NLPFDD subroutine to check the correctness of
analytical derivative specifications.
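For example, the following sketch compares the finite-difference gradient that the NLPFDD
subroutine returns with the value of an analytic gradient module. The modules F_ROSEN and G_ROSEN
are assumptions here; they are defined as in the Rosenbrock example in the next section:

   /*-- a minimal sketch of a derivative check: F_ROSEN and       --*/
   /*-- G_ROSEN are assumed to be defined as in the Rosenbrock    --*/
   /*-- example that follows                                      --*/
   x0 = {-1.2 1.};
   call nlpfdd(f, gfd, hes, "F_ROSEN", x0);   /* finite-difference g and H */
   gan = g_rosen(x0);                         /* analytic gradient         */
   print gfd, gan;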
Most of the results obtained from the IML procedure optimization and least squares subroutines can also be
obtained by using the OPTMODEL procedure or the NLP procedure in SAS/OR software.
The advantages of the IML procedure are as follows:

• You can use matrix algebra to specify the objective function, nonlinear constraints, and their
  derivatives in IML modules.

• The IML procedure offers several subroutines that can be used to specify the objective function
  or nonlinear constraints, many of which would be very difficult to write for the NLP procedure.

• You can formulate your own termination criteria by using the "ptit" module argument.

The advantages of the NLP procedure are as follows:

• Although identical optimization algorithms are used, the NLP procedure can be much faster
  because of the interactive and more general nature of the IML product.

• Analytic first- and second-order derivatives can be computed with a special compiler.

• Additional optimization methods are available in the NLP procedure that do not fit into the
  framework of this package.

• Data set processing is much easier than in the IML procedure. You can save results in output
  data sets and use them in subsequent runs.

• The printed output contains more information.

Getting Started

Unconstrained Rosenbrock Function

The Rosenbrock function is defined as

   f(x) = \frac{1}{2}\{ 100 (x_2 - x_1^2)^2 + (1 - x_1)^2 \}
        = \frac{1}{2}\{ f_1^2(x) + f_2^2(x) \}, \quad x = (x_1, x_2)

The minimum function value f^* = f(x^*) = 0 is at the point x^* = (1, 1).

The following code calls the NLPTR subroutine to solve the optimization problem:
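The call might look like the following minimal sketch. The module bodies are assumptions
consistent with the definitions above: F_ROSEN returns the function value f(x), and G_ROSEN
returns its analytic gradient.

proc iml;
/*-- a minimal sketch consistent with the definitions above --*/
start F_ROSEN(x);
   y1 = 10. * (x[2] - x[1]*x[1]);
   y2 = 1. - x[1];
   f = .5 * (y1*y1 + y2*y2);
   return(f);
finish F_ROSEN;
start G_ROSEN(x);
   g = j(1,2,0.);
   g[1] = -200.*x[1]*(x[2] - x[1]*x[1]) - (1. - x[1]);
   g[2] =  100.*(x[2] - x[1]*x[1]);
   return(g);
finish G_ROSEN;
x = {-1.2 1.};        /* initial point                      */
optn = {0 2};         /* minimize; print iteration history  */
call nlptr(rc,xres,"F_ROSEN",x,optn) grd="G_ROSEN";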
The NLPTR is a trust-region optimization method. The F_ROSEN module represents the Rosenbrock
function, and the G_ROSEN module represents its gradient. Specifying the gradient can reduce the number
of function calls by the optimization subroutine. The optimization begins at the initial point x D . 1:2; 1/.
For more information about the NLPTR subroutine and its arguments, see the section NLPTR Call on
page 855. For details about the options vector, which is given by the OPTN vector in the preceding code,
see the section Options Vector on page 356.
A portion of the output produced by the NLPTR subroutine is shown in Figure 14.1.

Figure 14.1 NLPTR Solution to the Rosenbrock Problem

Optimization Start
Parameter Estimates
Gradient
Objective
N Parameter Estimate Function

1 X1 -1.200000 -107.800000
2 X2 1.000000 -44.000000

Value of Objective Function = 12.1

Trust Region Optimization

Without Parameter Scaling


CRP Jacobian Computed by Finite Differences

Parameter Estimates 2

Optimization Start

Active Constraints 0 Objective Function 12.1


Max Abs Gradient Element 107.8 Radius 1

Max Abs Trust


Rest Func Act Objective Obj Fun Gradient Region
Iter arts Calls Con Function Change Element Lambda Radius

1 0 2 0 2.36594 9.7341 2.3189 0 1.000


2 0 5 0 2.05926 0.3067 5.2875 0.385 1.526
3 0 8 0 1.74390 0.3154 5.9934 0 1.086
4 0 9 0 1.43279 0.3111 6.5134 0.918 0.372
5 0 10 0 1.13242 0.3004 4.9245 0 0.373
6 0 11 0 0.86905 0.2634 2.9302 0 0.291
7 0 12 0 0.66711 0.2019 3.6584 0 0.205
8 0 13 0 0.47959 0.1875 1.7354 0 0.208
9 0 14 0 0.36337 0.1162 1.7589 2.916 0.132
10 0 15 0 0.26903 0.0943 3.4089 0 0.270
11 0 16 0 0.16280 0.1062 0.6902 0 0.201
12 0 19 0 0.11590 0.0469 1.1456 0 0.316
13 0 20 0 0.07616 0.0397 0.8462 0.931 0.134
14 0 21 0 0.04873 0.0274 2.8063 0 0.276
15 0 22 0 0.01862 0.0301 0.2290 0 0.232
16 0 25 0 0.01005 0.00858 0.4553 0 0.256
17 0 26 0 0.00414 0.00590 0.4297 0.247 0.104
18 0 27 0 0.00100 0.00314 0.4323 0.0453 0.104
19 0 28 0 0.0000961 0.000906 0.1134 0 0.104
20 0 29 0 1.67873E-6 0.000094 0.0224 0 0.0569
21 0 30 0 6.9582E-10 1.678E-6 0.000336 0 0.0248
22 0 31 0 1.3128E-16 6.96E-10 1.977E-7 0 0.00314

Figure 14.1 continued

Optimization Results

Iterations 22 Function Calls 32


Hessian Calls 23 Active Constraints 0
Objective Function 1.312814E-16 Max Abs Gradient Element 1.9773384E-7
Lambda 0 Actual Over Pred Change 0
Radius 0.003140192

ABSGCONV convergence criterion satisfied.

Optimization Results
Parameter Estimates
Gradient
Objective
N Parameter Estimate Function

1 X1 1.000000 0.000000198
2 X2 1.000000 -0.000000105

Value of Objective Function = 1.312814E-16

Since f(x) = \frac{1}{2}\{ f_1^2(x) + f_2^2(x) \}, you can also use least squares techniques in
this situation. The following code calls the NLPLM subroutine to solve the problem. The output is
shown in Figure 14.2.
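A minimal sketch of such a call follows. Here the F_ROSEN module body is an assumption: for least
squares subroutines, the module must return the vector of the m functions f_1(x) and f_2(x)
rather than the scalar f(x), and the first element of the OPTN vector gives m = 2:

proc iml;
/*-- a minimal sketch: the module returns (f1(x), f2(x)) --*/
start F_ROSEN(x);
   y = j(1,2,0.);
   y[1] = 10. * (x[2] - x[1]*x[1]);
   y[2] = 1. - x[1];
   return(y);
finish F_ROSEN;
x = {-1.2 1.};
optn = {2 2};     /* optn[1]=m, the number of functions; optn[2]=print level */
call nlplm(rc,xres,"F_ROSEN",x,optn);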

Figure 14.2 NLPLM Solution Using the Least Squares Technique

Optimization Start
Parameter Estimates
Gradient
Objective
N Parameter Estimate Function

1 X1 -1.200000 -107.799999
2 X2 1.000000 -44.000000

Value of Objective Function = 12.1

Levenberg-Marquardt Optimization

Scaling Update of More (1978)


Gradient Computed by Finite Differences
CRP Jacobian Computed by Finite Differences

Parameter Estimates 2
Functions (Observations) 2

Optimization Start

Active Constraints 0 Objective Function 12.1


Max Abs Gradient Element 107.7999987 Radius 2626.5613171

Figure 14.2 continued

Actual
Max Abs Over
Rest Func Act Objective Obj Fun Gradient Pred
Iter arts Calls Con Function Change Element Lambda Change

1 0 4 0 2.18185 9.9181 17.4704 0.00804 0.964


2 0 6 0 1.59370 0.5881 3.7015 0.0190 0.988
3 0 7 0 1.32848 0.2652 7.0843 0.00830 0.678
4 0 8 0 1.03891 0.2896 6.3092 0.00753 0.593
5 0 9 0 0.78943 0.2495 7.2617 0.00634 0.486
6 0 10 0 0.58838 0.2011 7.8837 0.00462 0.393
7 0 11 0 0.34224 0.2461 6.6815 0.00307 0.524
8 0 12 0 0.19630 0.1459 8.3857 0.00147 0.469
9 0 13 0 0.11626 0.0800 9.3086 0.00016 0.409
10 0 14 0 0.0000396 0.1162 0.1781 0 1.000
11 0 15 0 2.4652E-30 0.000040 4.44E-14 0 1.000

Optimization Results

Iterations 11 Function Calls 16


Jacobian Calls 12 Active Constraints 0
Objective Function 2.46519E-30 Max Abs Gradient Element 4.440892E-14
Lambda 0 Actual Over Pred Change 1
Radius 0.0178062912

ABSGCONV convergence criterion satisfied.

Optimization Results
Parameter Estimates
Gradient
Objective
N Parameter Estimate Function

1 X1 1.000000 -4.44089E-14
2 X2 1.000000 2.220446E-14

Value of Objective Function = 2.46519E-30

The Levenberg-Marquardt least squares method, which is the method used by the NLPLM subroutine,
is a modification of the trust-region method for nonlinear least squares problems. The F_ROSEN module
represents the Rosenbrock function. Note that for least squares problems, the m functions f1 .x/; : : : ; fm .x/
are specified as elements of a vector; this is different from the manner in which f .x/ is specified for the
other optimization techniques. No derivatives are specified in the preceding code, so the NLPLM subroutine
computes finite-difference approximations. For more information about the NLPLM subroutine, see the
section NLPLM Call on page 836.

Constrained Betts Function

The linearly constrained Betts function (Hock and Schittkowski 1981) is defined as

   f(x) = 0.01 x_1^2 + x_2^2 - 100

The boundary constraints are

   2 \le x_1 \le 50
   -50 \le x_2 \le 50

The linear constraint is

   10 x_1 - x_2 \ge 10

The following code calls the NLPCG subroutine to solve the optimization problem. The infeasible
initial point x^{(0)} = (-1, -1) is specified, and a portion of the output is shown in Figure 14.3.
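A minimal sketch of the call follows. The F_BETTS module body is an assumption consistent with the
definition above, and the constraint matrix CON encodes the bounds in its first two rows and the
linear constraint 10 x_1 - x_2 >= 10 in its third row:

proc iml;
start F_BETTS(x);
   f = .01 * x[1] * x[1] + x[2] * x[2] - 100.;
   return(f);
finish F_BETTS;
/*-- rows 1-2: lower and upper bounds; row 3: linear constraint --*/
/*-- coefficients (10, -1), relation >= (code 1), and rhs 10    --*/
con = {  2. -50.  .   . ,
        50.  50.  .   . ,
        10.  -1.  1. 10. };
x = {-1. -1.};        /* infeasible starting point */
optn = {0 1};
call nlpcg(rc,xres,"F_BETTS",x,optn,con);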
The NLPCG subroutine performs conjugate gradient optimization. It requires only function and gradient
calls. The F_BETTS module represents the Betts function, and since no module is defined to specify the gra-
dient, first-order derivatives are computed by finite-difference approximations. For more information about
the NLPCG subroutine, see the section NLPCG Call on page 826. For details about the constraint matrix,
which is represented by the CON matrix in the preceding code, see the section Parameter Constraints on
page 354.

Figure 14.3 NLPCG Solution to Betts Problem

   Optimization Start
   Parameter Estimates
                                    Gradient       Lower       Upper
                                   Objective       Bound       Bound
   N  Parameter    Estimate         Function  Constraint  Constraint
   1  X1           6.800000         0.136000    2.000000   50.000000
   2  X2          -1.000000        -2.000000  -50.000000   50.000000

   Linear Constraints
   1   59.00000 :   10.0000 <= + 10.0000 * X1 - 1.0000 * X2

   Parameter Estimates   2
   Lower Bounds          2
   Upper Bounds          2
   Linear Constraints    1

   Optimization Start
   Active Constraints          0   Objective Function   -98.5376
   Max Abs Gradient Element    2

                                      Max Abs           Slope of
         Rest  Func  Act  Objective   Obj Fun  Gradient   Step   Search
   Iter  arts Calls  Con   Function    Change   Element   Size    Direc
      1     0     3    0  -99.54682    1.0092    0.1346   0.502  -4.018
      2     1     7    1  -99.96000    0.4132   0.00272  34.985 -0.0182
      3     2     9    1  -99.96000   1.851E-6        0   0.500  -74E-7

   Optimization Results
   Iterations                    3   Function Calls               10
   Gradient Calls                9   Active Constraints            1
   Objective Function       -99.96   Max Abs Gradient Element      0
   Slope of Search Direction  -7.398365E-6

   Optimization Results
   Parameter Estimates
                                        Gradient      Active
                                       Objective       Bound
   N  Parameter         Estimate        Function  Constraint
   1  X1                2.000000        0.040000    Lower BC
   2  X2            -1.24028E-10               0

   Linear Constraints Evaluated at Solution
   1   10.00000 =  -10.0000 + 10.0000 * X1 - 1.0000 * X2

Since the initial point (-1, -1) is infeasible, the subroutine first computes a feasible starting point.
Convergence is achieved after three iterations, and the optimal point is given to be x* = (2, 0) with an
optimal function value of f* = f(x*) = -99.96. For more information about the printed output, see the
section "Printing the Optimization History" on page 369.

Rosen-Suzuki Problem

The Rosen-Suzuki problem is a function of four variables with three nonlinear constraints on the variables.
It is taken from problem 43 of Hock and Schittkowski (1981). The objective function is

   f(x) = x_1^2 + x_2^2 + 2 x_3^2 + x_4^2 - 5 x_1 - 5 x_2 - 21 x_3 + 7 x_4

The nonlinear constraints are

   0 ≤ 8 - x_1^2 - x_2^2 - x_3^2 - x_4^2 - x_1 + x_2 - x_3 + x_4
   0 ≤ 10 - x_1^2 - 2 x_2^2 - x_3^2 - 2 x_4^2 + x_1 + x_4
   0 ≤ 5 - 2 x_1^2 - x_2^2 - x_3^2 - 2 x_1 + x_2 + x_4

Since this problem has nonlinear constraints, only the NLPQN and NLPNMS subroutines are available to
perform the optimization. The following code solves the problem with the NLPQN subroutine:
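One way to write this program follows; the module names F_HS43 and C_HS43 and the OPTN settings match those described in the next paragraph:

   proc iml;
   start F_HS43(x);
      f = x*x` + x[3]*x[3] - 5*(x[1] + x[2]) - 21*x[3] + 7*x[4];
      return(f);
   finish F_HS43;

   start C_HS43(x);
      /* the three inequality constraints, each c[i] >= 0 */
      c = j(3,1,0.);
      c[1] = 8 - x*x` - x[1] + x[2] - x[3] + x[4];
      c[2] = 10 - x*x` - x[2]*x[2] - x[4]*x[4] + x[1] + x[4];
      c[3] = 5 - 2.*x[1]*x[1] - x[2]*x[2] - x[3]*x[3] - 2.*x[1] + x[2] + x[4];
      return(c);
   finish C_HS43;

   x = j(1,4,1);             /* initial point (1,1,1,1) */
   optn = j(1,11,.); optn[2]=3; optn[10]=3; optn[11]=0;
   call nlpqn(rc,xres,"F_HS43",x,optn) nlc="C_HS43";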

The F_HS43 module specifies the objective function, and the C_HS43 module specifies the nonlinear
constraints. The OPTN vector is passed to the subroutine as the OPT input argument. See the section
"Options Vector" on page 356 for more information. The value of OPTN[10] represents the total number
of nonlinear constraints, and the value of OPTN[11] represents the number of equality constraints. In the
preceding code, OPTN[10]=3 and OPTN[11]=0, which indicate that there are three constraints, all of which
are inequality constraints. In the subroutine calls, instead of separating missing input arguments with
commas, you can specify optional arguments with keywords, as in the CALL NLPQN statement in the
preceding code. For details about the CALL NLPQN statement, see the section "NLPQN Call" on page 847.
The initial point for the optimization procedure is x = (1, 1, 1, 1), and the optimal point is
x* = (0, 1, 2, -1), with an optimal function value of f(x*) = -44. Part of the output produced is shown in
Figure 14.4.

Figure 14.4 Solution to the Rosen-Suzuki Problem by the NLPQN Subroutine

   Optimization Start
   Parameter Estimates
                                    Gradient     Gradient
                                   Objective     Lagrange
   N  Parameter    Estimate         Function     Function
   1  X1           1.000000        -3.000000    -3.000000
   2  X2           1.000000        -3.000000    -3.000000
   3  X3           1.000000       -17.000000   -17.000000
   4  X4           1.000000         9.000000     9.000000

   Parameter Estimates     4
   Nonlinear Constraints   3

   Optimization Start
   Objective Function                    -19   Maximum Constraint Violation   0
   Maximum Gradient of the Lagran Func    17

                                                                  Maximum
                                                                 Gradient
                                   Maximum  Predicted          Element of
                  Func  Objective  Constraint Function   Step    Lagrange
   Iter Restarts Calls   Function  Violation  Reduction  Size    Function
      1        0     2  -41.88007     1.8988    13.6803  1.000      5.647
      2        0     3  -48.83264     3.0280     9.5464  1.000      5.041
      3        0     4  -45.33515     0.5452     2.6179  1.000      1.061
      4        0     5  -44.08667     0.0427     0.1732  1.000     0.0297
      5        0     6  -44.00011   0.000099   0.000218  1.000    0.00906
      6        0     7  -44.00001   2.573E-6   0.000014  1.000    0.00219
      7        0     8  -44.00000   9.118E-8   5.097E-7  1.000    0.00022

   Optimization Results
   Iterations                             7   Function Calls                         9
   Gradient Calls                         9   Active Constraints                     2
   Objective Function          -44.00000026   Maximum Constraint Violation 9.1176306E-8
   Maximum Projected Gradient  0.0002265341   Value Lagrange Function              -44
   Max Grad of the Lagran Func   0.00022158   Slope of Search Direction   -5.097332E-7

   Optimization Results
   Parameter Estimates
                                        Gradient     Gradient
                                       Objective     Lagrange
   N  Parameter        Estimate         Function     Function
   1  X1           -0.000001248        -5.000002  -0.000012804
   2  X2               1.000027        -2.999945      0.000222
   3  X3               1.999993       -13.000027  -0.000054166
   4  X4              -1.000003         4.999995  -0.000020681

In addition to the standard iteration history, the NLPQN subroutine includes the following information for
problems with nonlinear constraints:

• CONMAX is the maximum value of all constraint violations.

• PRED is the value of the predicted function reduction used with the GTOL and FTOL2 termination
  criteria.

• ALFA is the step size of the quasi-Newton step.

• LFGMAX is the maximum element of the gradient of the Lagrange function.

Details

Global versus Local Optima

All the IML optimization algorithms converge toward local rather than global optima. The smallest local
minimum of an objective function is called the global minimum, and the largest local maximum of an
objective function is called the global maximum. Hence, the subroutines can occasionally fail to find the
global optimum. Suppose you have the function

   f(x) = (1/27) (3 x_1^4 - 28 x_1^3 + 84 x_1^2 - 96 x_1 + 64) + x_2^2

which has a local minimum at f(1, 0) = 1 and a global minimum at the point f(4, 0) = 0.

The following statements use two calls of the NLPTR subroutine to minimize the preceding function. The
first call specifies the initial point x_a = (0.5, 1.5), and the second call specifies the initial point
x_b = (3, -1). The first call finds the local optimum x* = (1, 0), and the second call finds the global
optimum x* = (4, 0).

proc iml;
start F_GLOBAL(x);
   f = (3*x[1]**4 - 28*x[1]**3 + 84*x[1]**2 - 96*x[1] + 64)/27 + x[2]**2;
   return(f);
finish F_GLOBAL;

xa = {.5 1.5};
xb = {3 -1};
optn = {0 2};
call nlptr(rca,xra,"F_GLOBAL",xa,optn);
call nlptr(rcb,xrb,"F_GLOBAL",xb,optn);
print xra xrb;

One way to find out whether the objective function has more than one local optimum is to run various
optimizations with a pattern of different starting points.
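For example, the following sketch reuses the F_GLOBAL module above and tries a small, arbitrary pattern of starting points, collecting the solution from each successful run:

   starts = {0.5  1.5,
               3   -1,
              -2    2,
               5    0};            /* arbitrary pattern of starting points */
   optn = {0 0};                   /* minimize, suppress printed output    */
   xopt = j(nrow(starts),2,.);
   do i = 1 to nrow(starts);
      x0 = starts[i,];
      call nlptr(rc,xr,"F_GLOBAL",x0,optn);
      if rc > 0 then xopt[i,] = xr;   /* keep solutions from successful runs */
   end;
   print xopt;                     /* distinct rows suggest multiple local optima */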
For a more mathematical definition of optimality, refer to the Kuhn-Tucker theorem in standard optimization
literature. Using rather nonmathematical language, a local minimizer x* satisfies the following conditions:

• There exists a small, feasible neighborhood of x* that does not contain any point x with a smaller
  function value f(x) < f(x*).

• The vector of first derivatives (gradient) g(x*) = ∇f(x*) of the objective function f (projected
  toward the feasible region) at the point x* is zero.

• The matrix of second derivatives G(x*) = ∇^2 f(x*) (Hessian matrix) of the objective function f
  (projected toward the feasible region) at the point x* is positive definite.

A local maximizer has the largest value in a feasible neighborhood and a negative definite Hessian.
The iterative optimization algorithm terminates at the point x^t, which should be in a small neighborhood
(in terms of a user-specified termination criterion) of a local optimizer x*. If the point x^t is located on
one or more active boundary or general linear constraints, the local optimization conditions are valid only
for the feasible region. That is,

• the projected gradient, Z^T g(x^t), must be sufficiently small

• the projected Hessian, Z^T G(x^t) Z, must be positive definite for minimization problems or negative
  definite for maximization problems

If there are n active constraints at the point x^t, the nullspace Z has zero columns and the projected Hessian
has zero rows and columns. A matrix with zero rows and columns is considered positive as well as negative
definite.

Kuhn-Tucker Conditions

The nonlinear programming (NLP) problem with one objective function f and m constraint functions c_i,
which are continuously differentiable, is defined as follows:

   minimize f(x),  x ∈ R^n,  subject to
   c_i(x) = 0,   i = 1, ..., m_e
   c_i(x) ≥ 0,   i = m_e + 1, ..., m

In the preceding notation, n is the dimension of the function f(x), and m_e is the number of equality
constraints. The linear combination of objective and constraint functions

   L(x, λ) = f(x) - Σ_{i=1}^m λ_i c_i(x)

is the Lagrange function, and the coefficients λ_i are the Lagrange multipliers.

If the functions f and c_i are twice differentiable, the point x* is an isolated local minimizer of the NLP
problem if there exists a vector λ* = (λ_1*, ..., λ_m*) that meets the following conditions:

• Kuhn-Tucker conditions:

     c_i(x*) = 0,   i = 1, ..., m_e
     c_i(x*) ≥ 0,  λ_i* ≥ 0,  λ_i* c_i(x*) = 0,   i = m_e + 1, ..., m
     ∇_x L(x*, λ*) = 0

• second-order condition: each nonzero vector y ∈ R^n with

     y^T ∇_x c_i(x*) = 0  for i = 1, ..., m_e  and for all i ∈ {m_e + 1, ..., m} with λ_i* > 0

  satisfies

     y^T ∇_x^2 L(x*, λ*) y > 0

In practice, you cannot expect the constraint functions c_i(x*) to vanish within machine precision, and
determining the set of active constraints at the solution x* might not be simple.

Definition of Return Codes

The return code, which is represented by the output parameter rc in the optimization subroutines, indicates
the reason for optimization termination. A positive value indicates successful termination, while a negative
value indicates unsuccessful termination. Table 14.1 gives the reason for termination associated with each
return code.
Objective Function and Derivatives F 347

Table 14.1 Summary of Return Codes


Code Reason for Optimization Termination
1 ABSTOL criterion satisfied (absolute F convergence)
2 ABSFTOL criterion satisfied (absolute F convergence)
3 ABSGTOL criterion satisfied (absolute G convergence)
4 ABSXTOL criterion satisfied (absolute X convergence)
5 FTOL criterion satisfied (relative F convergence)
6 GTOL criterion satisfied (relative G convergence)
7 XTOL criterion satisfied (relative X convergence)
8 FTOL2 criterion satisfied (relative F convergence)
9 GTOL2 criterion satisfied (relative G convergence)
10   n linearly independent constraints are active at x_r and none of them could be
     released to improve the function value
-1 objective function cannot be evaluated at starting point
-2 derivatives cannot be evaluated at starting point
-3 objective function cannot be evaluated during iteration
-4 derivatives cannot be evaluated during iteration
-5 optimization subroutine cannot improve the function value (this is a very
general formulation and is used for various circumstances)
-6 there are problems in dealing with linearly dependent active constraints
(changing the LCSING value in the par vector can be helpful)
-7 optimization process stepped outside the feasible region and the algorithm
to return inside the feasible region was not successful (changing the LCEPS
value in the par vector can be helpful)
-8 either the number of iterations or the number of function calls is larger than
the prespecified values in the tc vector (MAXIT and MAXFU)
-9 this return code is temporarily not used (it is used in PROC NLP where it
indicates that more CPU than a prespecified value was used)
-10 a feasible starting point cannot be computed
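Because a positive return code indicates success and a negative code indicates failure, a program can branch on the sign of rc after any optimization call. A minimal sketch, assuming the F_ROSEN module of the Rosenbrock examples in this chapter:

   x0 = {-1.2 1.};
   call nlptr(rc,xres,"F_ROSEN",x0);
   if rc > 0 then print "Successful termination, rc =" rc;
   else print "Unsuccessful termination, rc =" rc;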

Objective Function and Derivatives

The input argument fun refers to an IML module that specifies a function that returns f, a vector of length
m for least squares subroutines or a scalar for other optimization subroutines. The returned f contains the
values of the objective function (or the least squares functions) at the point x. Note that for least squares
problems, you must specify the number of function values, m, with the first element of the opt argument
to allocate memory for the return vector. All the modules that you can specify as input arguments ("fun",
"grd", "hes", "jac", "nlc", "jacnlc", and "ptit") accept only a single input argument, x, which is the
parameter vector. Using the GLOBAL clause, you can provide more input arguments for these modules.
Refer to the section "Numerical Considerations" on page 378 for an example.
All the optimization algorithms assume that f is continuous inside the feasible region. For nonlinearly
constrained optimization, this is also required for points outside the feasible region. Sometimes the objec-
tive function cannot be computed for all points of the specified feasible region; for example, the function
specification might contain the SQRT or LOG function, which cannot be evaluated for negative arguments.
348 F Chapter 14: Nonlinear Optimization Examples

You must make sure that the function and derivatives of the starting point can be evaluated. There are two
ways to prevent large steps into infeasible regions of the parameter space during the optimization process:

• The preferred way is to restrict the parameter space by introducing more boundary and linear
  constraints. For example, the boundary constraint x_j ≥ 1E-10 prevents infeasible evaluations of
  log(x_j). If the function module takes the square root or the log of an intermediate result, you can use
  nonlinear constraints to try to avoid infeasible function evaluations. However, this might not ensure
  feasibility.

• Sometimes the preferred way is difficult to practice, in which case the function module can return a
  missing value. This can force the optimization algorithm to reduce the step length or the radius of the
  feasible region, as the sketch after this list illustrates.
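The following minimal sketch illustrates the second approach for a hypothetical objective module F_LOGX that takes the log of an intermediate result:

   start F_LOGX(x);
      t = x[1] - 1.;             /* intermediate result that must be positive */
      if t <= 0 then return(.);  /* missing value forces a shorter step       */
      f = -log(t) + x[2]*x[2];
      return(f);
   finish F_LOGX;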

All the optimization techniques except the NLPNMS subroutine require continuous first-order derivatives
of the objective function f. The NLPTR, NLPNRA, and NLPNRR techniques also require continuous
second-order derivatives. If you do not provide the derivatives with the IML modules "grd", "hes", or
"jac", they are automatically approximated by finite-difference formulas. Approximating first-order
derivatives by finite differences usually requires n additional calls of the function module. Approximating
second-order derivatives by finite differences using only function calls can be extremely computationally
expensive. Hence, if you decide to use the NLPTR, NLPNRA, or NLPNRR subroutines, you should specify
at least analytical first-order derivatives. Then, approximating second-order derivatives by finite differences
requires only n or 2n additional calls of the function and gradient modules.
For all input and output arguments, the subroutines assume that

• the number of parameters n corresponds to the number of columns. For example, x, the input
  argument to the modules, and g, the output argument returned by the "grd" module, are row vectors
  with n entries, and G, the Hessian matrix returned by the "hes" module, must be a symmetric
  n × n matrix.

• the number of functions, m, corresponds to the number of rows. For example, the vector f returned
  by the "fun" module must be a column vector with m entries, and in least squares problems, the
  Jacobian matrix J returned by the "jac" module must be an m × n matrix.

You can verify your analytical derivative specifications by computing finite-difference approximations of the
derivatives of f with the NLPFDD subroutine. For most applications, the finite-difference approximations
of the derivatives are very precise. Occasionally, difficult objective functions and zero x coordinates cause
problems. You can use the par argument to specify the number of accurate digits in the evaluation of the
objective function; this defines the step size h of the first- and second-order finite-difference formulas. See
the section "Finite-Difference Approximations of Derivatives" on page 352.

NOTE: For some difficult applications, the finite-difference approximations of derivatives that are generated
by default might not be precise enough to solve the optimization or least squares problem. In such cases,
you might be able to specify better derivative approximations by using a better approximation formula. You
can submit your own finite-difference approximations by using the IML module "grd", "hes", "jac", or
"jacnlc". See Example 14.3 for an illustration.
In many applications, calculations used in the computation of f can help compute derivatives at the same
point efficiently. You can save and reuse such calculations with the GLOBAL clause. As with many other
optimization packages, the subroutines call the "grd", "hes", or "jac" modules only after a call of the
"fun" module.
The following statements specify modules for the function, gradient, and Hessian matrix of the Rosenbrock
problem:

proc iml;
start F_ROSEN(x);
   y1 = 10. * (x[2] - x[1] * x[1]);
   y2 = 1. - x[1];
   f = .5 * (y1 * y1 + y2 * y2);
   return(f);
finish F_ROSEN;

start G_ROSEN(x);
   g = j(1,2,0.);
   g[1] = -200.*x[1]*(x[2]-x[1]*x[1]) - (1.-x[1]);
   g[2] = 100.*(x[2]-x[1]*x[1]);
   return(g);
finish G_ROSEN;

start H_ROSEN(x);
   h = j(2,2,0.);
   h[1,1] = -200.*(x[2] - 3.*x[1]*x[1]) + 1.;
   h[2,2] = 100.;
   h[1,2] = -200. * x[1];
   h[2,1] = h[1,2];
   return(h);
finish H_ROSEN;
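You can carry out the NLPFDD check mentioned above with a few statements; for this function the finite-difference and analytic results should nearly agree:

   x0 = {-1.2 1.};
   call nlpfdd(fval,gfd,hfd,"F_ROSEN",x0);  /* finite-difference g and H */
   gan = g_rosen(x0);                       /* analytic gradient         */
   han = h_rosen(x0);                       /* analytic Hessian          */
   print gfd gan, hfd han;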

The following statements specify a module for the Rosenbrock function when considered as a least squares
problem. They also specify the Jacobian matrix of the least squares functions.

proc iml;
start F_ROSEN(x);
   y = j(1,2,0.);
   y[1] = 10. * (x[2] - x[1] * x[1]);
   y[2] = 1. - x[1];
   return(y);
finish F_ROSEN;

start J_ROSEN(x);
   jac = j(2,2,0.);
   jac[1,1] = -20. * x[1];   jac[1,2] = 10.;
   jac[2,1] = -1.;           jac[2,2] = 0.;
   return(jac);
finish J_ROSEN;

Diagonal or Sparse Hessian Matrices

In the unconstrained or only boundary constrained case, the NLPNRA algorithm can take advantage of
diagonal or sparse Hessian matrices submitted by the "hes" module. If the Hessian matrix G of the objective
function f has a large proportion of zeros, you can save computer time and memory by specifying a sparse
Hessian of dimension nn × 3 rather than a dense n × n Hessian. Each of the nn rows (i, j, g) of the matrix
returned by the sparse Hessian module defines a nonzero element g_ij of the Hessian matrix. The row and
column location is given by i and j, and g gives the nonzero value. During the optimization process, only
the values g can be changed in each call of the Hessian module "hes"; the sparsity structure (i, j) must
be kept the same. That means that some of the values g can be zero for particular values of x. To allocate
sufficient memory before the first call of the Hessian module, you must specify the number of rows, nn, by
setting the ninth element of the opt argument.
Example 22 of Moré, Garbow, and Hillstrom (1981) illustrates the sparse Hessian module input. The
objective function, which is the Extended Powell's Singular Function, for n = 40 is a least squares problem:

   f(x) = (1/2) { f_1^2(x) + ... + f_m^2(x) }

with

   f_{4i-3}(x) = x_{4i-3} + 10 x_{4i-2}
   f_{4i-2}(x) = sqrt(5) (x_{4i-1} - x_{4i})
   f_{4i-1}(x) = (x_{4i-2} - 2 x_{4i-1})^2
   f_{4i}(x)   = sqrt(10) (x_{4i-3} - x_{4i})^2

The function and gradient modules are as follows:

start f_nlp22(x);
   n=ncol(x);
   f = 0.;
   do i=1 to n-3 by 4;
      f1 = x[i] + 10. * x[i+1];
      r2 = x[i+2] - x[i+3];
      f2 = 5. * r2;
      r3 = x[i+1] - 2. * x[i+2];
      f3 = r3 * r3;
      r4 = x[i] - x[i+3];
      f4 = 10. * r4 * r4;
      f = f + f1 * f1 + r2 * f2 + f3 * f3 + r4 * r4 * f4;
   end;
   f = .5 * f;
   return(f);
finish f_nlp22;

start g_nlp22(x);
   n=ncol(x);
   g = j(1,n,0.);
   do i=1 to n-3 by 4;
      f1 = x[i] + 10. * x[i+1];
      f2 = 5. * (x[i+2] - x[i+3]);
      r3 = x[i+1] - 2. * x[i+2];
      f3 = r3 * r3;
      r4 = x[i] - x[i+3];
      f4 = 10. * r4 * r4;
      g[i]   = f1 + 2. * r4 * f4;
      g[i+1] = 10. * f1 + 2. * r3 * f3;
      g[i+2] = f2 - 4. * r3 * f3;
      g[i+3] = -f2 - 2. * r4 * f4;
   end;
   return(g);
finish g_nlp22;

You can specify the sparse Hessian with the following module:

start hs_nlp22(x);
   n=ncol(x);
   nnz = 8 * (n / 4);
   h = j(nnz,3,0.);
   j = 0;
   do i=1 to n-3 by 4;
      f1 = x[i] + 10. * x[i+1];
      f2 = 5. * (x[i+2] - x[i+3]);
      r3 = x[i+1] - 2. * x[i+2];
      f3 = r3 * r3;
      r4 = x[i] - x[i+3];
      f4 = 10. * r4 * r4;
      j = j + 1; h[j,1] = i;   h[j,2] = i;
      h[j,3] = 1. + 4. * f4;
      h[j,3] = h[j,3] + 2. * f4;
      j = j + 1; h[j,1] = i;   h[j,2] = i+1;
      h[j,3] = 10.;
      j = j + 1; h[j,1] = i;   h[j,2] = i+3;
      h[j,3] = -4. * f4;
      h[j,3] = h[j,3] - 2. * f4;
      j = j + 1; h[j,1] = i+1; h[j,2] = i+1;
      h[j,3] = 100. + 4. * f3;
      h[j,3] = h[j,3] + 2. * f3;
      j = j + 1; h[j,1] = i+1; h[j,2] = i+2;
      h[j,3] = -8. * f3;
      h[j,3] = h[j,3] - 4. * f3;
      j = j + 1; h[j,1] = i+2; h[j,2] = i+2;
      h[j,3] = 5. + 16. * f3;
      h[j,3] = h[j,3] + 8. * f3;
      j = j + 1; h[j,1] = i+2; h[j,2] = i+3;
      h[j,3] = -5.;
      j = j + 1; h[j,1] = i+3; h[j,2] = i+3;
      h[j,3] = 5. + 4. * f4;
      h[j,3] = h[j,3] + 2. * f4;
   end;
   return(h);
finish hs_nlp22;

n = 40;
x = j(1,n,0.);
do i=1 to n-3 by 4;
   x[i] = 3.; x[i+1] = -1.; x[i+3] = 1.;
end;
opt = j(1,11,.); opt[2]= 3; opt[9]= 8 * (n / 4);
call nlpnra(rc,xr,"f_nlp22",x,opt) grd="g_nlp22" hes="hs_nlp22";

NOTE: If the sparse form of Hessian defines a diagonal matrix (that is, i = j in all nn rows), the NLPNRA
algorithm stores and processes a diagonal matrix G. If you do not specify any general linear constraints, the
NLPNRA subroutine uses only order n memory.

Finite-Difference Approximations of Derivatives

If the optimization technique needs first- or second-order derivatives and you do not specify the
corresponding IML module "grd", "hes", "jac", or "jacnlc", the derivatives are approximated by
finite-difference formulas using only calls of the module "fun". If the optimization technique needs
second-order derivatives and you specify the "grd" module but not the "hes" module, the subroutine
approximates the second-order derivatives by finite differences using n or 2n calls of the "grd" module.

The eighth element of the opt argument specifies the type of finite-difference approximation used to compute
first- or second-order derivatives and whether the finite-difference intervals, h, should be computed by an
algorithm of Gill et al. (1983). The value of opt[8] is a two-digit integer, ij.

• If opt[8] is missing or j = 0, the fast but not very precise forward-difference formulas are used; if
  j ≠ 0, the numerically more expensive central-difference formulas are used.

• If opt[8] is missing or i ≠ 1, 2, or 3, the finite-difference intervals h are based only on the information
  of par[8] or par[9], which specifies the number of accurate digits to use in evaluating the objective
  function and nonlinear constraints, respectively. If i = 1, 2, or 3, the intervals are computed with
  an algorithm by Gill et al. (1983). For i = 1, the interval is based on the behavior of the objective
  function; for i = 2, the interval is based on the behavior of the nonlinear constraint functions; and for
  i = 3, the interval is based on the behavior of both the objective function and the nonlinear constraint
  functions.

Forward-Difference Approximations

• First-order derivatives: n additional function calls are needed.

     g_i = ∂f/∂x_i ≈ ( f(x + h_i e_i) - f(x) ) / h_i

• Second-order derivatives based on function calls only, when the "grd" module is not specified (Dennis
  and Schnabel 1983): for a dense Hessian matrix, n + n^2/2 additional function calls are needed.

     ∂^2 f/(∂x_i ∂x_j) ≈ ( f(x + h_i e_i + h_j e_j) - f(x + h_i e_i) - f(x + h_j e_j) + f(x) ) / (h_i h_j)

• Second-order derivatives based on gradient calls, when the "grd" module is specified (Dennis and
  Schnabel 1983): n additional gradient calls are needed.

     ∂^2 f/(∂x_i ∂x_j) ≈ ( g_i(x + h_j e_j) - g_i(x) ) / (2 h_j) + ( g_j(x + h_i e_i) - g_j(x) ) / (2 h_i)
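As an illustration only (the subroutines perform these computations internally), the first-order forward-difference formula takes just a few lines of IML. This sketch assumes the scalar F_ROSEN module from the preceding section and uses a simple fixed relative step size instead of the η_j-based intervals described below:

   start fdgrad(x);
      n = ncol(x);
      g = j(1,n,0.);
      h = 1e-7 # (1 + abs(x));     /* crude step sizes; cf. the h_j formulas */
      fx = f_rosen(x);             /* one call of the function module        */
      do i = 1 to n;
         e = j(1,n,0.); e[i] = 1.;
         g[i] = (f_rosen(x + h[i]*e) - fx) / h[i];   /* forward difference */
      end;                         /* n additional function calls in total  */
      return(g);
   finish fdgrad;

   x0 = {-1.2 1.};
   g = fdgrad(x0);     /* compare with the analytic G_ROSEN(x0) */
   print g;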

Central-Difference Approximations

• First-order derivatives: 2n additional function calls are needed.

     g_i = ∂f/∂x_i ≈ ( f(x + h_i e_i) - f(x - h_i e_i) ) / (2 h_i)

• Second-order derivatives based on function calls only, when the "grd" module is not specified
  (Abramowitz and Stegun 1972): for a dense Hessian matrix, 2n + 2n^2 additional function calls
  are needed.

     ∂^2 f/∂x_i^2 ≈ ( -f(x + 2 h_i e_i) + 16 f(x + h_i e_i) - 30 f(x) + 16 f(x - h_i e_i) - f(x - 2 h_i e_i) ) / (12 h_i^2)

     ∂^2 f/(∂x_i ∂x_j) ≈ ( f(x + h_i e_i + h_j e_j) - f(x + h_i e_i - h_j e_j)
                           - f(x - h_i e_i + h_j e_j) + f(x - h_i e_i - h_j e_j) ) / (4 h_i h_j)

• Second-order derivatives based on gradient calls, when the "grd" module is specified: 2n additional
  gradient calls are needed.

     ∂^2 f/(∂x_i ∂x_j) ≈ ( g_i(x + h_j e_j) - g_i(x - h_j e_j) ) / (4 h_j) + ( g_j(x + h_i e_i) - g_j(x - h_i e_i) ) / (4 h_i)

The step sizes h_j, j = 1, ..., n, are defined as follows:

• For the forward-difference approximation of first-order derivatives using only function calls and for
  second-order derivatives using only gradient calls, h_j = (η_j)^{1/2} (1 + |x_j|).

• For the forward-difference approximation of second-order derivatives using only function calls and
  for central-difference formulas, h_j = (η_j)^{1/3} (1 + |x_j|).

If the algorithm of Gill et al. (1983) is not used to compute η_j, a constant value η = η_j is used that depends
on the value of par[8].

• If the number of accurate digits is specified by par[8] = k_1, then η is set to 10^{-k_1}.

• If par[8] is not specified, η is set to the machine precision, ε.

If central-difference formulas are not specified, the optimization algorithm switches automatically from the
forward-difference formula to a corresponding central-difference formula during the iteration process if one
of the following two criteria is satisfied:

• The absolute maximum gradient element is less than or equal to 100 times the ABSGTOL threshold.

• The term on the left of the GTOL criterion is less than or equal to max(1E-6, 100 × GTOL threshold).
  The 1E-6 ensures that the switch is performed even if you set the GTOL threshold to zero.

The algorithm of Gill et al. (1983) that computes the finite-difference intervals h_j can be very expensive
in the number of function calls it uses. If this algorithm is required, it is performed twice, once before the
optimization process starts and once after the optimization terminates.

Many applications need considerably more time for computing second-order derivatives than for computing
first-order derivatives. In such cases, you should use a quasi-Newton or conjugate gradient technique.

If you specify a vector, c, of nc nonlinear constraints with the "nlc" module but you do not specify the
"jacnlc" module, the first-order formulas can be used to compute finite-difference approximations of the
nc × n Jacobian matrix of the nonlinear constraints.

   ( ∇c_i ) = ( ∂c_i/∂x_j ),   i = 1, ..., nc;  j = 1, ..., n

You can specify the number of accurate digits in the constraint evaluations with par[9]. This specification
also defines the step sizes h_j, j = 1, ..., n.

NOTE: If you are not able to specify analytic derivatives and if the finite-difference approximations provided
by the subroutines are not good enough to solve your optimization problem, you might be able to implement
better finite-difference approximations with the "grd", "hes", "jac", and "jacnlc" module arguments.

Parameter Constraints

You can specify constraints in the following ways:

• The matrix input argument blc enables you to specify boundary and general linear constraints.

• The IML module input argument nlc enables you to specify general constraints, particularly
  nonlinear constraints.

Specifying the BLC Matrix

The input argument blc specifies an n1 × n2 constraint matrix, where n1 is two more than the number of
linear constraints, and n2 is given by

   n2 = n        if 1 ≤ n1 ≤ 2
   n2 = n + 2    if n1 > 2

The first two rows define lower and upper bounds for the n parameters, and the remaining c = n1 - 2 rows
define general linear equality and inequality constraints. Missing values in the first row (lower bounds)
substitute for the largest negative floating point value, and missing values in the second row (upper bounds)
substitute for the largest positive floating point value. Columns n+1 and n+2 of the first two rows are not
used.

The following c rows of the blc argument specify c linear equality or inequality constraints:

   Σ_{j=1}^n a_ij x_j  { ≤ | = | ≥ }  b_i ,   i = 1, ..., c

Each of these c rows contains the coefficients a_ij in the first n columns. Column n+1 specifies the kind of
constraint, as follows:

• blc[n+1] = 0 indicates an equality constraint.

• blc[n+1] = 1 indicates a ≥ inequality constraint.

• blc[n+1] = -1 indicates a ≤ inequality constraint.

Column n+2 specifies the right-hand side, b_i. A missing value in any of these rows corresponds to a value
of zero.

For example, suppose you have a problem with the following constraints on x_1, x_2, x_3, x_4:

   2 ≤ x_1 ≤ 100
       x_2 ≤ 40
   0 ≤ x_4

   4 x_1 + 3 x_2 - x_3 ≤ 30
   x_2 + 6 x_4 ≥ 17
   x_1 - x_2 = 8
The following statements specify the matrix CON, which can be used as the blc argument to specify the
preceding constraints:

proc iml;
con = {  2    .   .   0   .    . ,
       100   40   .   .   .    . ,
         4    3  -1   .  -1   30 ,
         .    1   .   6   1   17 ,
         1   -1   .   .   0    8 };
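The matrix is then passed as the blc argument (the sixth positional argument) of an optimization call. The following sketch assumes a hypothetical objective module F_OBJ and an arbitrary feasible starting point:

   x0 = {10 2 20 3};         /* satisfies all six constraints above */
   optn = {0 1};
   call nlpqn(rc,xres,"F_OBJ",x0,optn,con);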

Specifying the NLC and JACNLC Modules

The input argument nlc specifies an IML module that returns a vector, c, of length nc, with the values, c_i,
of the nc linear or nonlinear constraints

   c_i(x) = 0,   i = 1, ..., nec
   c_i(x) ≥ 0,   i = nec + 1, ..., nc

for a given input parameter point x.

NOTE: You must specify the number of equality constraints, nec, and the total number of constraints, nc,
returned by the "nlc" module to allocate memory for the return vector. You can do this with the opt[11] and
opt[10] arguments, respectively.

For example, consider the problem of minimizing the objective function f(x_1, x_2) = x_1 x_2 in the interior
of the unit circle, x_1^2 + x_2^2 ≤ 1. The constraint can also be written as c_1(x) = 1 - x_1^2 - x_2^2 ≥ 0.
The following statements specify modules for the objective and constraint functions and call the NLPNMS
subroutine to solve the minimization problem:

proc iml;
start F_UC2D(x);
   f = x[1] * x[2];
   return(f);
finish F_UC2D;

start C_UC2D(x);
   c = 1. - x * x`;
   return(c);
finish C_UC2D;

x = j(1,2,1.);
optn = j(1,10,.); optn[2]= 3; optn[10]= 1;
CALL NLPNMS(rc,xres,"F_UC2D",x,optn) nlc="C_UC2D";

To avoid typing multiple commas, you can specify the nlc input argument with a keyword, as in the
preceding code. The number of elements of the return vector is specified by OPTN[10] = 1. There is a
missing value in OPTN[11], so the subroutine assumes there are zero equality constraints.
The NLPQN algorithm uses the nc × n Jacobian matrix of first-order derivatives

   ( ∇_x c_i(x) ) = ( ∂c_i/∂x_j ),   i = 1, ..., nc;  j = 1, ..., n

of the nc equality and inequality constraints, c_i, for each point passed during the iteration. You can use the
"jacnlc" argument to specify an IML module that returns the Jacobian matrix JC. If you specify the "nlc"
module without using the "jacnlc" argument, the subroutine uses finite-difference approximations of the
first-order derivatives of the constraints.
NOTE: The COBYLA algorithm in the NLPNMS subroutine and the NLPQN subroutine are the only
optimization techniques that enable you to specify nonlinear constraints with the nlc input argument.

Options Vector

The options vector, represented by the opt argument, enables you to specify a variety of options, such as
the amount of printed output or particular update or line-search techniques. Table 14.2 gives a summary of
the available options.

Table 14.2 Summary of the Elements of the Options Vector

Index  Description
1      specifies minimization, maximization, or the number of least squares functions
2      specifies the amount of printed output
3      NLPDD, NLPLM, NLPNRA, NLPNRR, NLPTR: specifies the scaling of the Hessian matrix (HESCAL)
4      NLPCG, NLPDD, NLPHQN, NLPQN: specifies the update technique (UPDATE)
5      NLPCG, NLPHQN, NLPNRA, NLPQN (with no nonlinear constraints): specifies the line-search
       technique (LIS)
6      NLPHQN: specifies version of hybrid algorithm (VERSION)
       NLPQN with nonlinear constraints: specifies version of the update of the vector of Lagrange
       multipliers
7      NLPDD, NLPHQN, NLPQN: specifies initial Hessian matrix (INHESSIAN)
8      finite-difference derivatives: specifies type of differences and how to compute the difference
       interval
9      NLPNRA: specifies the number of rows returned by the sparse Hessian module
10     NLPNMS, NLPQN: specifies the total number of constraints returned by the nlc module
11     NLPNMS, NLPQN: specifies the number of equality constraints returned by the nlc module

The following list contains detailed explanations of the elements of the options vector:

• opt[1]
  indicates whether the problem is minimization or maximization. The default, opt[1] = 0, specifies a
  minimization problem, and opt[1] = 1 specifies a maximization problem. For least squares problems,
  opt[1] = m specifies the number of functions or observations, which is the number of values returned
  by the "fun" module. This information is necessary to allocate memory for the return vector of the
  "fun" module.

• opt[2]
  specifies the amount of output printed by the subroutine. The higher the value of opt[2], the more
  printed output is produced. The following table indicates the specific items printed for each value.

  Value of opt[2]   Printed Output
  0                 No printed output is produced. This is the default.
  1                 The summaries for optimization start and termination are produced, as well as
                    the iteration history.
  2                 The initial and final parameter estimates are also printed.
  3                 The values of the termination criteria and other control parameters are also
                    printed.
  4                 The parameter vector, x, is also printed after each iteration.
  5                 The gradient vector, g, is also printed after each iteration.
• opt[3]
  selects a scaling for the Hessian matrix, G. This option is relevant only for the NLPDD, NLPLM,
  NLPNRA, NLPNRR, and NLPTR subroutines. If opt[3] ≠ 0, the first iteration and each restart
  iteration set the diagonal scaling matrix D^(0) = diag(d_i^(0)), where

     d_i^(0) = sqrt( max(|G_i,i^(0)|, ε) )

  and G_i,i^(0) are the diagonal elements of the Hessian matrix, and ε is the machine precision. The
  diagonal scaling matrix D^(0) = diag(d_i^(0)) is updated as indicated in the following table.

  Value of opt[3]   Scaling Update
  0                 No scaling is done.
  1                 Moré (1978) scaling update:
                       d_i^(k+1) = max( d_i^(k), sqrt( max(|G_i,i^(k)|, ε) ) )
  2                 Dennis, Gay, and Welsch (1981) scaling update:
                       d_i^(k+1) = max( 0.6 d_i^(k), sqrt( max(|G_i,i^(k)|, ε) ) )
  3                 d_i is reset in each iteration:
                       d_i^(k+1) = sqrt( max(|G_i,i^(k)|, ε) )

  For the NLPDD, NLPNRA, NLPNRR, and NLPTR subroutines, the default is opt[3] = 0; for the
  NLPLM subroutine, the default is opt[3] = 1.

• opt[4]
  defines the update technique for (dual) quasi-Newton and conjugate gradient techniques. This option
  applies to the NLPCG, NLPDD, NLPHQN, and NLPQN subroutines. For the NLPCG subroutine,
  the following update techniques are available.

  Value of opt[4]   Update Method for NLPCG
  1                 automatic restart method of Powell (1977) and Beale (1972). This is the default.
  2                 Fletcher-Reeves update (Fletcher 1987)
  3                 Polak-Ribière update (Fletcher 1987)
  4                 conjugate-descent update of Fletcher (1987)

  For the unconstrained or linearly constrained NLPQN subroutine, the following update techniques are
  available.

  Value of opt[4]   Update Method for NLPQN
  1                 dual Broyden, Fletcher, Goldfarb, and Shanno (DBFGS) update of the Cholesky
                    factor of the Hessian matrix. This is the default.
  2                 dual Davidon, Fletcher, and Powell (DDFP) update of the Cholesky factor of
                    the Hessian matrix
  3                 original Broyden, Fletcher, Goldfarb, and Shanno (BFGS) update of the inverse
                    Hessian matrix
  4                 original Davidon, Fletcher, and Powell (DFP) update of the inverse Hessian
                    matrix

  For the NLPQN subroutine used with the "nlc" module and for the NLPDD and NLPHQN
  subroutines, only the first two update techniques in the second table are available.

• opt[5]
  defines the line-search technique for the unconstrained or linearly constrained NLPQN subroutine, as
  well as the NLPCG, NLPHQN, and NLPNRA subroutines. Refer to Fletcher (1987) for an introduction
  to line-search techniques. The following table describes the available techniques.

  Value of opt[5]   Line-Search Method
  1                 This method needs the same number of function and gradient calls for cubic
                    interpolation and cubic extrapolation; it is similar to a method used by the
                    Harwell subroutine library.
  2                 This method needs more function than gradient calls for quadratic and cubic
                    interpolation and cubic extrapolation; it is implemented as shown in Fletcher
                    (1987) and can be modified to exact line search with the par[6] argument (see
                    the section "Control Parameters Vector" on page 367). This is the default for
                    the NLPCG, NLPNRA, and NLPQN subroutines.
  3                 This method needs the same number of function and gradient calls for cubic
                    interpolation and cubic extrapolation; it is implemented as shown in Fletcher
                    (1987) and can be modified to exact line search with the par[6] argument.
  4                 This method needs the same number of function and gradient calls for stepwise
                    extrapolation and cubic interpolation.
  5                 This method is a modified version of the opt[5]=4 method.
  6                 This method is the golden section line search of Polak (1971), which uses only
                    function values for linear approximation.
  7                 This method is the bisection line search of Polak (1971), which uses only
                    function values for linear approximation.
  8                 This method is the Armijo line-search technique of Polak (1971), which uses
                    only function values for linear approximation.

  For the NLPHQN least squares subroutine, the default is a special line-search method that is based
  on an algorithm developed by Lindström and Wedin (1984). Although it needs more memory, this
  method sometimes works better with large least squares problems.

• opt[6]
  is used only for the NLPHQN subroutine and the NLPQN subroutine with nonlinear constraints.
  In the NLPHQN subroutine, it defines the criterion for the decision of the hybrid algorithm to step
  in a Gauss-Newton or a quasi-Newton search direction. You can specify one of the three criteria
  that correspond to the methods of Fletcher and Xu (1987). The methods are HY1 (opt[6]=1), HY2
  (opt[6]=2), and HY3 (opt[6]=3), and the default is HY2.

  In the NLPQN subroutine with nonlinear constraints, it defines the version of the algorithm used to
  update the vector of Lagrange multipliers. The default is opt[6]=2, which specifies the approach
  of Powell (1982a) and Powell (1982b). You can specify the approach of Powell (1978a) with opt[6]=1.

• opt[7]
  defines the type of start matrix, G^(0), used for the Hessian approximation. This option applies only to
  the NLPDD, NLPHQN, and NLPQN subroutines. If opt[7]=0, which is the default, the quasi-Newton
  algorithm starts with a multiple of the identity matrix where the scalar factor depends on par[10];
  otherwise, it starts with the Hessian matrix computed at the starting point x^(0).
• opt[8]
  defines the type of finite-difference approximation used to compute first- or second-order derivatives
  and whether the finite-difference intervals, h, should be computed by using an algorithm of Gill et al.
  (1983). The value of opt[8] is a two-digit integer, ij.

  If opt[8] is missing or j = 0, the fast but not very precise forward-difference formulas are used; if
  j ≠ 0, the numerically more expensive central-difference formulas are used.

  If opt[8] is missing or i ≠ 1, 2, or 3, the finite-difference intervals h are based only on the information
  of par[8] or par[9], which specifies the number of accurate digits to use in evaluating the objective
  function and nonlinear constraints, respectively. If i = 1, 2, or 3, the intervals are computed with
  an algorithm by Gill et al. (1983). For i = 1, the interval is based on the behavior of the objective
  function; for i = 2, the interval is based on the behavior of the nonlinear constraint functions; and for
  i = 3, the interval is based on the behavior of both the objective function and the nonlinear constraint
  functions.

  The algorithm of Gill et al. (1983) that computes the finite-difference intervals h_j can be very
  expensive in the number of function calls it uses. If this algorithm is required, it is performed twice,
  once before the optimization process starts and once after the optimization terminates. See the section
  "Finite-Difference Approximations of Derivatives" on page 352 for details.
Finite-Difference Approximations of Derivatives on page 352 for details.
• opt[9]
  indicates that the Hessian module "hes" returns a sparse definition of the Hessian, in the form of an
  nn × 3 matrix instead of the default dense n × n matrix. If opt[9] is zero or missing, the Hessian
  module must return a dense n × n matrix. If you specify opt[9] = nn, the module must return a sparse
  nn × 3 table. See the section "Objective Function and Derivatives" on page 347 for more details. This
  option applies only to the NLPNRA algorithm. If the dense specification contains a large proportion
  of analytical zero derivatives, the sparse specification can save memory and computer time.
• opt[10]
  specifies the total number of nonlinear constraints returned by the "nlc" module. If you specify nc
  nonlinear constraints with the "nlc" argument module, you must specify opt[10] = nc to allocate
  memory for the return vector.

• opt[11]
  specifies the number of nonlinear equality constraints returned by the "nlc" module. If the first nec
  constraints are equality constraints, you must specify opt[11] = nec. The default value is opt[11] = 0.
  A consolidated sketch of setting these options follows this list.
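The following sketch shows one typical way to assemble the options vector; it mirrors the Rosen-Suzuki example earlier in this chapter, and missing values select the defaults:

   optn = j(1,11,.);    /* missing values select the defaults           */
   optn[1]  = 0;        /* minimization                                 */
   optn[2]  = 2;        /* print parameter estimates and history        */
   optn[10] = 3;        /* three constraints returned by the nlc module */
   optn[11] = 0;        /* no equality constraints among them           */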

Termination Criteria

The input argument tc specifies a vector of bounds that correspond to a set of termination criteria that are
tested in each iteration. If you do not specify an IML module with the "ptit" argument, these bounds
determine when the optimization process stops.

If you specify the "ptit" argument, the tc argument is ignored. The module specified by the "ptit"
argument replaces the subroutine that is used by default to test the termination criteria. The module is called
in each iteration with the current location, x, and the value, f, of the objective function at x. The module
must give a return code, rc, that decides whether the optimization process is to be continued or terminated.
As long as the module returns rc = 0, the optimization process continues. When the module returns rc ≠ 0,
the optimization process stops.

If you use the tc vector, the optimization techniques stop the iteration process when at least one of the
corresponding set of termination criteria are satisfied. Table 14.3 and Table 14.4 indicate the criterion
associated with each element of the tc vector. There is a default for each criterion, and if you specify a
missing value for the corresponding element of the tc vector, the default value is used. You can avoid
termination with respect to the ABSFTOL, ABSGTOL, ABSXTOL, FTOL, FTOL2, GTOL, GTOL2, and
XTOL criteria by specifying a value of zero for the corresponding element of the tc vector.
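For example, the following sketch modifies the Rosen-Suzuki call shown earlier so that the iteration and function-call limits are raised and the absolute gradient criterion is tightened; missing values in tc keep the defaults, and the empty argument position skips the blc matrix:

   tc = j(1,13,.);      /* missing values keep the default criteria */
   tc[1] = 500;         /* MAXIT                                    */
   tc[2] = 1500;        /* MAXFU                                    */
   tc[6] = 1e-6;        /* ABSGTOL                                  */
   call nlpqn(rc,xres,"F_HS43",x,optn,,tc) nlc="C_HS43";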

Table 14.3 Termination Criteria for the NLPNMS Subroutine

Index  Description
1      maximum number of iterations (MAXIT)
2      maximum number of function calls (MAXFU)
3      absolute function criterion (ABSTOL)
4      relative function criterion (FTOL)
5      relative function criterion (FTOL2)
6      absolute function criterion (ABSFTOL)
7      FSIZE value used in FTOL criterion
8      relative parameter criterion (XTOL)
9      absolute parameter criterion (ABSXTOL)
9      size of final trust-region radius ρ (COBYLA algorithm)
10     XSIZE value used in XTOL criterion

Table 14.4 Termination Criteria for Other Subroutines


Index Description
1 maximum number of iterations (MAXIT)
2 maximum number of function calls (MAXFU)
3 absolute function criterion (ABSTOL)
4 relative gradient criterion (GTOL)
5 relative gradient criterion (GTOL2)
6 absolute gradient criterion (ABSGTOL)
7 relative function criterion (FTOL)
8 predicted function reduction criterion (FTOL2)
9 absolute function criterion (ABSFTOL)
10 FSIZE value used in GTOL and FTOL criterion
11 relative parameter criterion (XTOL)
12 absolute parameter criterion (ABSXTOL)
13 XSIZE value used in XTOL criterion

Criteria Used by All Techniques

The following list indicates the termination criteria that are used with all the optimization techniques:

• tc[1]
  specifies the maximum number of iterations in the optimization process (MAXIT). The default values
  are

  NLPNMS:  MAXIT=1000
  NLPCG:   MAXIT=400
  Others:  MAXIT=200

• tc[2]
  specifies the maximum number of function calls in the optimization process (MAXFU). The default
  values are

  NLPNMS:  MAXFU=3000
  NLPCG:   MAXFU=1000
  Others:  MAXFU=500

• tc[3]
  specifies the absolute function convergence criterion (ABSTOL). For minimization, termination
  requires f^(k) = f(x^(k)) ≤ ABSTOL, while for maximization, termination requires
  f^(k) = f(x^(k)) ≥ ABSTOL. The default values are the negative and positive square roots of the
  largest double precision value, for minimization and maximization, respectively.

These criteria are useful when you want to divide a time-consuming optimization problem into a series of
smaller problems.

Termination Criteria for NLPNMS

Since the Nelder-Mead simplex algorithm does not use derivatives, no termination criteria are available that
are based on the gradient of the objective function.

When the NLPNMS subroutine implements Powell's COBYLA algorithm, it uses only one criterion other
than the three used by all the optimization techniques. The COBYLA algorithm is a trust-region method that
sequentially reduces the radius, ρ, of a spheric trust region from the start radius, ρ_beg, which is controlled
with the par[2] argument, to the final radius, ρ_end, which is controlled with the tc[9] argument. The default
value for tc[9] is ρ_end = 1E-4. Convergence to small values of ρ_end can take many calls of the function
and constraint modules and might result in numerical problems.

In addition to the criteria used by all techniques, the original Nelder-Mead simplex algorithm uses several
other termination criteria, which are described in the following list:

• tc[4]
  specifies the relative function convergence criterion (FTOL). Termination requires a small relative
  difference between the function values of the vertices in the simplex with the largest and smallest
  function values.

     |f_hi^(k) - f_lo^(k)| / max(|f_hi^(k)|, FSIZE) ≤ FTOL

  where FSIZE is defined by tc[7]. The default value is tc[4] = 10^{-FDIGITS}, where FDIGITS is
  controlled by the par[8] argument. The par[8] argument has a default value of -log10(ε), where ε is
  the machine precision. Hence, the default value for FTOL is ε.

• tc[5]
  specifies another relative function convergence criterion (FTOL2). Termination requires a small
  standard deviation of the function values of the n + 1 simplex vertices x_0^(k), ..., x_n^(k).

     sqrt( (1/(n+1)) Σ_l ( f(x_l^(k)) - μ_f )^2 ) ≤ FTOL2

  where μ_f = (1/(n+1)) Σ_l f(x_l^(k)) is the mean function value. If there are a active boundary
  constraints at x^(k), the mean and standard deviation are computed only for the n + 1 - a
  unconstrained vertices. The default is tc[5] = 1E-6.

• tc[6]
  specifies the absolute function convergence criterion (ABSFTOL). Termination requires a small
  absolute difference between the function values of the vertices in the simplex with the largest and
  smallest function values.

     |f_hi^(k) - f_lo^(k)| ≤ ABSFTOL

  The default is tc[6] = 0.

• tc[7]
  specifies the FSIZE value used in the FTOL termination criterion. The default is tc[7] = 0.

• tc[8]
  specifies the relative parameter convergence criterion (XTOL). Termination requires a small relative
  parameter difference between the vertices with the largest and smallest function values.

     max_j |x_j^lo - x_j^hi| / max(|x_j^lo|, |x_j^hi|, XSIZE) ≤ XTOL

  The default is tc[8] = 1E-8.

• tc[9]
  specifies the absolute parameter convergence criterion (ABSXTOL). Termination requires either a
  small length, α^(k), of the vertices of a restart simplex or a small simplex size, δ^(k).

     α^(k) ≤ ABSXTOL
     δ^(k) ≤ ABSXTOL

  where δ^(k) is defined as the L1 distance of the simplex vertex with the smallest function value, y^(k),
  to the other n simplex points, x_l^(k) ≠ y^(k).

     δ^(k) = Σ_{x_l ≠ y} || x_l^(k) - y^(k) ||_1

  The default is tc[9] = 1E-8.

• tc[10]
  specifies the XSIZE value used in the XTOL termination criterion. The default is tc[10] = 0.

Termination Criteria for Unconstrained and Linearly Constrained Techniques

• tc[4]
  specifies the relative gradient convergence criterion (GTOL). For all techniques except the NLPCG
  subroutine, termination requires that the normalized predicted function reduction be small.

     g(x^(k))^T [G^(k)]^{-1} g(x^(k)) / max(|f(x^(k))|, FSIZE) ≤ GTOL

  where FSIZE is defined by tc[10]. For the NLPCG technique (where a reliable Hessian estimate is
  not available), the following criterion is used:

     || g(x^(k)) ||_2^2  || s(x^(k)) ||_2 / ( || g(x^(k)) - g(x^(k-1)) ||_2  max(|f(x^(k))|, FSIZE) ) ≤ GTOL

  The default is tc[4] = 1E-8.

• tc[5]
  specifies another relative gradient convergence criterion (GTOL2). This criterion is used only by the
  NLPLM subroutine.

     max_j ( |g_j(x^(k))| / sqrt( f(x^(k)) G_{j,j}^(k) ) ) ≤ GTOL2

  The default is tc[5] = 0.

• tc[6]
  specifies the absolute gradient convergence criterion (ABSGTOL). Termination requires that the
  maximum absolute gradient element be small.

     max_j |g_j(x^(k))| ≤ ABSGTOL

  The default is tc[6] = 1E-5.

• tc[7]
  specifies the relative function convergence criterion (FTOL). Termination requires a small relative
  change of the function value in consecutive iterations.

     |f(x^(k)) - f(x^(k-1))| / max(|f(x^(k-1))|, FSIZE) ≤ FTOL

  where FSIZE is defined by tc[10]. The default is tc[7] = 10^{-FDIGITS}, where FDIGITS is
  controlled by the par[8] argument. The par[8] argument has a default value of -log10(ε), where ε is
  the machine precision. Hence, the default for FTOL is ε.

• tc[8]
  specifies another function convergence criterion (FTOL2). For least squares problems, termination
  requires a small predicted reduction of the objective function, df^(k) ≈ f(x^(k)) - f(x^(k) + s^(k)).
  The predicted reduction is computed by approximating the objective function by the first two terms
  of the Taylor series and substituting the Newton step, s^(k) = -[G^(k)]^{-1} g^(k), as follows:

     df^(k) = -g^(k)T s^(k) - (1/2) s^(k)T G^(k) s^(k)
            = -(1/2) s^(k)T g^(k)
            ≤ FTOL2

  The FTOL2 criterion is the unscaled version of the GTOL criterion. The default is tc[8] = 0.

• tc[9]
  specifies the absolute function convergence criterion (ABSFTOL). Termination requires a small
  change of the function value in consecutive iterations.

     |f(x^(k-1)) - f(x^(k))| ≤ ABSFTOL

  The default is tc[9] = 0.

• tc[10]
  specifies the FSIZE value used in the GTOL and FTOL termination criteria. The default is tc[10] = 0.

• tc[11]
  specifies the relative parameter convergence criterion (XTOL). Termination requires a small relative
  parameter change in consecutive iterations.

     max_j |x_j^(k) - x_j^(k-1)| / max(|x_j^(k)|, |x_j^(k-1)|, XSIZE) ≤ XTOL

  The default is tc[11] = 0.

• tc[12]
  specifies the absolute parameter convergence criterion (ABSXTOL). Termination requires a small
  Euclidean distance between parameter vectors in consecutive iterations.

     || x^(k) - x^(k-1) ||_2 ≤ ABSXTOL

  The default is tc[12] = 0.

• tc[13]
  specifies the XSIZE value used in the XTOL termination criterion. The default is tc[13] = 0.

Termination Criteria for Nonlinearly Constrained Techniques

The only algorithm available for nonlinearly constrained optimization other than the NLPNMS subroutine
is the NLPQN subroutine, when you specify the "nlc" module argument. This method, unlike the other
optimization methods, does not monotonically reduce the value of the objective function or some kind of
merit function that combines objective and constraint functions. Instead, the algorithm uses the watchdog
technique with backtracking of Chamberlain et al. (1982). Therefore, no termination criteria are
implemented that are based on the values x or f in consecutive iterations. In addition to the criteria used by
all optimization techniques, three other termination criteria are available; these are based on the Lagrange
function

   L(x, λ) = f(x) - Σ_{i=1}^m λ_i c_i(x)

and its gradient

   ∇_x L(x, λ) = g(x) - Σ_{i=1}^m λ_i ∇_x c_i(x)

where m denotes the total number of constraints, g = g(x) is the gradient of the objective function, and λ
is the vector of Lagrange multipliers. The Kuhn-Tucker conditions require that the gradient of the Lagrange
function be zero at the optimal point (x*, λ*), as follows:

   ∇_x L(x*, λ*) = 0

• tc[4]
  specifies the GTOL criterion, which requires that the normalized predicted function reduction be
  small.

     ( |g(x^(k)) s(x^(k))| + Σ_{i=1}^m |λ_i c_i(x^(k))| ) / max(|f(x^(k))|, FSIZE) ≤ GTOL

  where FSIZE is defined by the tc[10] argument. The default is tc[4] = 1E-8.

• tc[6]
  specifies the ABSGTOL criterion, which requires that the maximum absolute gradient element of the
  Lagrange function be small.

     max_j | {∇_x L(x^(k), λ^(k))}_j | ≤ ABSGTOL

  The default is tc[6] = 1E-5.

• tc[8]
  specifies the FTOL2 criterion, which requires that the predicted function reduction be small.

     |g(x^(k)) s(x^(k))| + Σ_{i=1}^m |λ_i c_i| ≤ FTOL2

  The default is tc[8] = 1E-6. This is the criterion used by the programs VMCWD and VF02AD of
  Powell (1982b).

Control Parameters Vector

For all optimization and least squares subroutines, the input argument par specifies a vector of parameters
that control the optimization process. For the NLPFDD and NLPFEA subroutines, the par argument is
defined differently. For each element of the par vector there exists a default value, and if you specify a
missing value, the default is used. Table 14.5 summarizes the uses of the par argument for the optimization
and least squares subroutines.

Table 14.5 Summary of the Control Parameters Vector

Index  Description
1      specifies the singularity criterion (SINGULAR)
2      specifies the initial step length or trust-region radius
3      specifies the range for active (violated) constraints (LCEPS)
4      specifies the Lagrange multiplier threshold for constraints (LCDEACT)
5      specifies a criterion to determine linear dependence of constraints (LCSING)
6      specifies the required accuracy of the line-search algorithms (LSPRECISION)
7      reduces the line-search step size in successive iterations (DAMPSTEP)
8      specifies the number of accurate digits used in evaluating the objective function (FDIGITS)
9      specifies the number of accurate digits used in evaluating the nonlinear constraints (CDIGITS)
10     specifies a scalar factor for the diagonal of the initial Hessian (DIAHES)

 par[1]
specifies the singularity criterion for the decomposition of the Hessian matrix (SINGULAR). The
value must be between zero and one, and the default is par1 D1E 8.

 par[2]
specifies different features depending on the subroutine in which it is used. In the NLPNMS sub-
routine, it defines the size of the start simplex. For the original Nelder-Mead simplex algorithm, the
default value is par2 D 1; for the COBYLA algorithm, the default is par2 D 0:5. In the NLPCG,
NLPQN, and NLPHQN subroutines, the par[2] argument specifies an upper bound for the initial step
length for the line search during the first five iterations. The default initial step length is par2 D 1. In
the NLPTR, NLPDD, and NLPLM subroutines, the par[2] argument specifies a factor for the initial
trust-region radius, . For highly nonlinear functions, the default step length or trust-region radius
can result in arithmetic overflows. In that case, you can specify stepwise decreasing values of par[2],
such as par[2]=1E 1, par[2]=1E 2, par[2]=1E 4, until the subroutine starts to iterate successfully.

 par[3]
specifies the range (LCEPS) for active and violated linear constraints. The i th constraint is considered
368 F Chapter 14: Nonlinear Optimization Examples

an active constraint if the point x .k/ satisfies the condition



n
X
.k/

aij xj bi  LCEPS.jbi j C 1/
j D1

where LCEPS is the value of par[3] and aij and bi are defined as in the section Parameter Con-
straints on page 354. Otherwise, the constraint i is either an inactive inequality or a violated in-
equality or equality constraint. The default is par3 D1E 8. During the optimization process, the
introduction of rounding errors can force the subroutine to increase the value of par[3] by a power of
10, but the value never becomes larger than 1E 3.

 par[4]
specifies a threshold (LCDEACT) for the Lagrange multiplier that decides whether an active inequal-
ity constraint must remain active or can be deactivated. For maximization, par[4] must be positive,
and for minimization,
par[4] must be negative. The default is
  
par4 D min 0:01; max 0:1  ABSGTOL; 0:001  gmax.k/

where the positive value is for maximization and the negative value is for minimization. ABSGTOL
is the value of the absolute gradient criterion, and gmax.k/ is the maximum absolute element of the
gradient, g .k/ , or the projected gradient, Z T g .k/ .

* par[5]
  specifies a criterion (LCSING) used in the update of the QR decomposition that decides whether
  an active constraint is linearly dependent on a set of other active constraints. The default is
  par[5] = 1E-8. As the value of par[5] increases, more active constraints are recognized as
  being linearly dependent. If the value of par[5] is larger than 0.1, it is reset to 0.1, and if
  it is negative, it is reset to zero.

* par[6]
  specifies the degree of accuracy (LSPRECISION) that should be obtained by the second or third
  line-search algorithm. This argument can be used with the NLPCG, NLPHQN, and NLPNRA algorithms
  and with the NLPQN algorithm if the nlc argument is specified. Usually, an imprecise line
  search is computationally inexpensive and successful, but for more difficult optimization
  problems, a more precise and time-consuming line search can be necessary. Refer to Fletcher
  (1987) for details. If you have numerical problems, you should decrease the value of the par[6]
  argument to obtain a more precise line search. The default values are given in the following
  table.

     Subroutine   Update Method   Default Value
     NLPCG        all             par[6] = 0.1
     NLPHQN       DBFGS           par[6] = 0.1
     NLPHQN       DDFP            par[6] = 0.06
     NLPNRA       no update       par[6] = 0.9
     NLPQN        BFGS, DBFGS     par[6] = 0.4
     NLPQN        DFP, DDFP       par[6] = 0.06
* par[7]
  specifies a scalar factor (DAMPSTEP) that can be used to reduce the step size in each of the
  first five iterations. In each of these iterations, the starting step size, $\alpha^{(0)}$, can
  be no larger than the value of par[7] times the step size obtained by the line-search algorithm
  in the previous iteration. If par[7] is missing or if par[7]=0, which is the default, the
  starting step size in iteration t is computed as a function of the function change from the
  former iteration, $f^{(t-1)} - f^{(t)}$. If the computed value is outside the interval
  $[0.1, 10.0]$, it is moved to the next endpoint. You can further restrict the starting step
  size in the first five iterations with the par[2] argument.

* par[8]
  specifies the number of accurate digits (FDIGITS) used to evaluate the objective function. The
  default is $-\log_{10}(\epsilon)$, where $\epsilon$ is the machine precision, and fractional
  values are permitted. This value is used to compute the step size h for finite-difference
  derivatives and the default value for the FTOL termination criterion.
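  For instance, you can compute this default directly in PROC IML (a small sketch of ours;
  CONSTANT is the Base SAS function for machine constants):

     eps     = constant('maceps');    /* machine precision epsilon          */
     fdigits = -log10(eps);           /* default FDIGITS = -log10(epsilon)  */
     print eps fdigits;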

* par[9]
  specifies the number of accurate digits (CDIGITS) used to evaluate the nonlinear constraint
  functions of the nlc module. The default is $-\log_{10}(\epsilon)$, where $\epsilon$ is the
  machine precision, and fractional values are permitted. The value is used to compute the step
  size h for finite-difference derivatives. If first-order derivatives are specified by the
  jacnlc module, the par[9] argument is ignored.

* par[10]
  specifies a scalar factor (DIAHES) for the diagonal of the initial Hessian approximation. This
  argument is available in the NLPDD, NLPHQN, and NLPQN subroutines. If the opt[7] argument is
  not specified, the initial Hessian approximation is a multiple of the identity matrix
  determined by the magnitude of the initial gradient $g(x^{(0)})$. The value of the par[10]
  argument is used to specify par[10] $\times$ I for the initial Hessian in the quasi-Newton
  algorithm.

Printing the Optimization History

Each optimization and least squares subroutine prints the optimization history, as long as
opt[2] >= 1 and you do not specify the ptit module argument. You can use this output to check for
possible convergence problems. If you specify the ptit argument, you can enter a print command
inside the module, which is called at each iteration.

The amount of information printed depends on the opt[2] argument. See the section "Options
Vector" on page 356.
The output consists of three main parts:

* Optimization Start Output

  The following information about the initial state of the optimization can be printed:

  - the number of constraints that are active at the starting point, or, more precisely, the
    number of constraints that are currently members of the working set. If this number is
    followed by a plus sign (+), there are more active constraints, at least one of which is
    temporarily released from the working set due to negative Lagrange multipliers.
  - the value of the objective function at the starting point
  - the value of the largest absolute (projected) gradient element
  - the initial trust-region radius for the NLPTR and NLPLM subroutines

* General Iteration History

  In general, the iteration history consists of one line of printed output for each iteration,
  with the exception of the Nelder-Mead simplex method. The NLPNMS subroutine prints a line only
  after several internal iterations because some of the termination tests are time-consuming
  compared to the simplex operations and because the subroutine typically uses many iterations.

  The iteration history always includes the following columns:

  - iter     is the iteration number.
  - nrest    is the number of iteration restarts.
  - nfun     is the number of function calls.
  - act      is the number of active constraints.
  - optcrit  is the value of the optimization criterion.
  - difcrit  is the difference between adjacent function values.
  - maxgrad  is the maximum of the absolute (projected) gradient components.

  An apostrophe trailing the number of active constraints indicates that at least one of the
  active constraints was released from the active set due to a significant Lagrange multiplier.

  Some subroutines print additional information at each iteration; for details see the entry that
  corresponds to each subroutine in the section "Nonlinear Optimization and Related Subroutines"
  on page 823.

* Optimization Result Output

  The output ends with the following information about the optimization result:

  - the number of constraints that are active at the final point, or, more precisely, the number
    of constraints that are currently members of the working set. When this number is followed by
    a plus sign (+), there are more active constraints, at least one of which is temporarily
    released from the working set due to negative Lagrange multipliers.
  - the value of the objective function at the final point
  - the value of the largest absolute (projected) gradient element

Nonlinear Optimization Examples

Example 14.1: Chemical Equilibrium

The following example is used in many test libraries for nonlinear programming. It appeared
originally in Bracken and McCormick (1968).

The problem is to determine the composition of a mixture of various chemicals that satisfies the
mixture's chemical equilibrium state. The second law of thermodynamics implies that at a constant
temperature and pressure, a mixture of chemicals satisfies its chemical equilibrium state when
the free energy of the mixture is reduced to a minimum. Therefore, the composition of the
chemicals that satisfies the chemical equilibrium state can be found by minimizing the free
energy of the mixture.
The following notation is used in this problem:

   m      number of chemical elements in the mixture
   n      number of compounds in the mixture
   x_j    number of moles for compound j, j = 1,...,n
   s      total number of moles in the mixture, $s = \sum_{j=1}^{n} x_j$
   a_ij   number of atoms of element i in a molecule of compound j
   b_i    atomic weight of element i in the mixture, i = 1,...,m

The constraints for the mixture are as follows. Each of the compounds must have a nonnegative
number of moles:

   $$x_j \ge 0, \quad j = 1, \ldots, n$$

There is a mass balance relationship for each element. Each relation is given by a linear
equality constraint:

   $$\sum_{j=1}^{n} a_{ij} x_j = b_i, \quad i = 1, \ldots, m$$

The objective function is the total free energy of the mixture:

   $$f(x) = \sum_{j=1}^{n} x_j \left[ c_j + \ln\left(\frac{x_j}{s}\right) \right]$$

where

   $$c_j = \left(\frac{F^0}{RT}\right)_j + \ln(P)$$

and $(F^0/RT)_j$ is the model standard free energy function for the jth compound. The value of
$(F^0/RT)_j$ is found in existing tables. P is the total pressure in atmospheres.


The problem is to determine the parameters $x_j$ that minimize the objective function $f(x)$
subject to the nonnegativity and linear balance constraints. To illustrate this, consider the
following situation. Determine the equilibrium composition of compound
$\frac{1}{2}\mathrm{N_2H_4} + \frac{1}{2}\mathrm{O_2}$ at temperature $T = 3500$ K and pressure
$P = 750$ psi. The following table gives a summary of the information necessary to solve the
problem.
                                                     a_ij
    j   Compound   (F^0/RT)_j        c_j     i=1 (H)  i=2 (N)  i=3 (O)
    1   H             -10.021     -6.089        1
    2   H2            -21.096    -17.164        2
    3   H2O           -37.986    -34.054        2                  1
    4   N              -9.846     -5.914                  1
    5   N2            -28.653    -24.721                  2
    6   NH            -18.918    -14.986        1         1
    7   NO            -28.032    -24.100                  1        1
    8   O             -14.640    -10.708                           1
    9   O2            -30.594    -26.662                           2
   10   OH            -26.111    -22.179        1                  1
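As a quick sanity check of the c_j column (our own verification, not part of the original
example), converting P = 750 psi to atmospheres and applying the formula for c_j reproduces the
tabulated value for the first compound:

proc iml;
P_atm = 750 / 14.696;             /* 750 psi expressed in atmospheres  */
c1    = -10.021 + log(P_atm);     /* c_1 = (F0/RT)_1 + ln(P)           */
print P_atm c1;                   /* c1 is approximately -6.089        */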

The following statements solve the minimization problem:

proc iml;
c = { -6.089 -17.164 -34.054 -5.914 -24.721
-14.986 -24.100 -10.708 -26.662 -22.179 };
start F_BRACK(x) global(c);
s = x[+];
f = sum(x # (c + log(x / s)));
return(f);
finish F_BRACK;

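/* Rows 1-2 of CON are the lower and upper bounds; rows 3-5 are the
   three mass balance equalities. In each constraint row, column 11
   (the value 0) flags an equality and column 12 is the right-hand
   side b_i. */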
con = { . . . . . . . . . . . . ,
. . . . . . . . . . . . ,
1. 2. 2. . . 1. . . . 1. 0. 2. ,
. . . 1. 2. 1. 1. . . . 0. 1. ,
. . 1. . . . 1. 1. 2. 1. 0. 1. };
con[1,1:10] = 1.e-6;

x0 = j(1,10, .1);
optn = {0 3};

title 'NLPTR subroutine: No Derivatives';


call nlptr(xres,rc,"F_BRACK",x0,optn,con);

The F_BRACK module specifies the objective function, $f(x)$. The matrix CON specifies the
constraints. The first row gives the lower bound for each parameter, and to prevent the
evaluation of the log(x) function for values of x that are too small, the lower bounds are set
here to 1E-6. The following three rows contain the three linear equality constraints.

The starting point, which must be given to specify the number of parameters, is represented by
X0. The first element of the OPTN vector specifies a minimization problem, and the second element
specifies the amount of printed output.

The CALL NLPTR statement runs trust-region minimization. In this case, since no analytic
derivatives are specified, the F_BRACK module is used to generate finite-difference
approximations for the gradient vector and Hessian matrix.
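Because the $x_j/x_j$ and summation terms cancel on differentiation, f has the closed-form
gradient $g_j = c_j + \ln(x_j/s)$. The following sketch (our own addition, not part of the
original example) uses the NLPFDD subroutine to compare the finite-difference gradient with this
analytic form at the starting point:

call nlpfdd(f0, g0, h0, "F_BRACK", x0);  /* finite-difference f, g, H  */
s  = x0[+];
ga = c + log(x0 / s);                    /* analytic gradient at x0    */
maxdiff = max(abs(colvec(g0) - colvec(ga)));
print maxdiff;                           /* should be near zero        */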
The output is shown in the following figures. The iteration history does not show any problems.

                              Optimization Start

Active Constraints                  3  Objective Function      -45.05516448
Max Abs Gradient         4.4710303342  Radius                             1
  Element

                                                     Max Abs             Trust
          Rest  Func  Act      Objective   Obj Fun  Gradient            Region
    Iter  arts Calls  Con       Function    Change   Element   Lambda   Radius

       1     0     2   3'      -47.33413    2.2790    4.3611    2.456    1.000
       2     0     3   3'      -47.70050    0.3664    7.0045    0.909    0.418
       3     0     4   3       -47.73117    0.0307    5.3051        0    0.359
       4     0     5   3       -47.73426   0.00310    3.7015        0    0.118
       5     0     6   3       -47.73982   0.00555    2.3054        0   0.0169
       6     0     7   3       -47.74846   0.00864    1.3029   90.133  0.00476
       7     0     9   3       -47.75796   0.00950    0.5073        0   0.0134
       8     0    10   3       -47.76094   0.00297    0.0988        0   0.0124
       9     0    11   3       -47.76109  0.000155   0.00447        0   0.0111
      10     0    12   3       -47.76109  3.386E-7  9.328E-6        0  0.00332

                             Optimization Results

Iterations                         10  Function Calls                     13
Hessian Calls                      11  Active Constraints                  3
Objective Function       -47.76109086  Max Abs Gradient         6.6122907E-6
                                         Element
Lambda                              0  Actual Over Pred Change             0
Radius                   0.0033211642

The output lists the optimal parameters with the gradient.

                   Optimization Results
                   Parameter Estimates
                                        Gradient
                                       Objective
    N  Parameter      Estimate          Function

    1  X1             0.040668         -9.785055
    2  X2             0.147730        -19.570111
    3  X3             0.783153        -34.792170
    4  X4             0.001414        -12.968921
    5  X5             0.485247        -25.937842
    6  X6             0.000693        -22.753976
    7  X7             0.027399        -28.190991
    8  X8             0.017947        -15.222060
    9  X9             0.037314        -30.444120
   10  X10            0.096871        -25.007114

The three equality constraints are satisfied at the solution.

             Linear Constraints Evaluated at Solution

    1  ACT  -3.192E-16  =  -2.0000  +  1.0000 * X1  +  2.0000 * X2
                                    +  2.0000 * X3  +  1.0000 * X6
                                    +  1.0000 * X10
    2  ACT  3.8164E-17  =  -1.0000  +  1.0000 * X4  +  2.0000 * X5
                                    +  1.0000 * X6  +  1.0000 * X7
    3  ACT  -2.637E-16  =  -1.0000  +  1.0000 * X3  +  1.0000 * X7
                                    +  1.0000 * X8  +  2.0000 * X9
                                    +  1.0000 * X10

The Lagrange multipliers and the projected gradient are also printed. The elements of the projected gradient
must be small to satisfy a first-order optimality condition.

              First Order Lagrange Multipliers

                                     Lagrange
              Active Constraint    Multiplier

              Linear EC [1]         -9.785055
              Linear EC [2]        -12.968922
              Linear EC [3]        -15.222061

Projected Gradient

Free Projected
Dimension Gradient

1 0.000000142
2 -0.000000548
3 -0.000000472
4 -0.000006612
5 -0.000004683
6 -0.000004373
7 -0.000001815
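You can reproduce this check yourself. In the following sketch (ours; it assumes that xres from
the earlier call holds the optimal point and reuses c and con), the columns of Z = HOMOGEN(A)
span the null space of the equality constraint matrix, so Z`g is a projected gradient. Because
this basis differs from the one the subroutine uses internally, individual entries need not match
the table, but all should be near zero:

A = con[3:5, 1:10];          /* coefficients of the three equalities  */
Z = homogen(A);              /* 10 x 7 null-space basis               */
s = xres[+];
g = c + log(xres / s);       /* analytic gradient at the solution     */
projgrad = Z` * colvec(g);   /* projected gradient; near zero         */
print projgrad;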

Example 14.2: Network Flow and Delay

The following example is taken from the user's guide of the GINO program (Liebman et al. 1986). A
simple network of five roads (arcs) can be illustrated by a path diagram.

The five roads connect four intersections illustrated by numbered nodes. Each minute, F vehicles
enter and leave the network. The parameter $x_{ij}$ refers to the flow from node i to node j. The
requirement that traffic that flows into each intersection j must also flow out is described by
the linear equality constraint

   $$\sum_{i} x_{ij} = \sum_{i} x_{ji}, \quad j = 1, \ldots, n$$

In general, roads also have an upper limit on the number of vehicles that can be handled per
minute. These limits, denoted $c_{ij}$, can be enforced by boundary constraints:

   $$0 \le x_{ij} \le c_{ij}$$

The goal in this problem is to maximize the flow, which is equivalent to maximizing the objective
function $f(x)$, where $f(x)$ is

   $$f(x) = x_{24} + x_{34}$$

The boundary constraints are

   $$0 \le x_{12}, x_{32}, x_{34} \le 10$$
   $$0 \le x_{13}, x_{24} \le 30$$

and the flow constraints are

   $$x_{13} = x_{32} + x_{34}$$
   $$x_{24} = x_{12} + x_{32}$$
   $$x_{12} + x_{13} = x_{24} + x_{34}$$

The three linear equality constraints are linearly dependent. One of them is deleted
automatically by the optimization subroutine. The following notation is used in this example:

   $$X1 = x_{12}, \quad X2 = x_{13}, \quad X3 = x_{32}, \quad X4 = x_{24}, \quad X5 = x_{34}$$

Even though the NLPCG subroutine is used, any other optimization subroutine would also solve this small
problem. The following code finds the maximum flow:

proc iml;
title 'Maximum Flow Through a Network';
start MAXFLOW(x);
f = x[4] + x[5];
return(f);
finish MAXFLOW;

con = { 0. 0. 0. 0. 0. . . ,
10. 30. 10. 30. 10. . . ,
0. 1. -1. 0. -1. 0. 0. ,
1. 0. 1. -1. 0. 0. 0. ,
1. 1. 0. -1. -1. 0. 0. };
x = j(1,5, 1.);
optn = {1 3};
call nlpcg(xres,rc,"MAXFLOW",x,optn,con);

The optimal solution is shown in the following output.

                     Optimization Results
                     Parameter Estimates
                                      Gradient      Active
                                     Objective       Bound
    N  Parameter       Estimate       Function  Constraint

    1  X1             10.000000              0    Upper BC
    2  X2             10.000000              0
    3  X3             10.000000       1.000000    Upper BC
    4  X4             20.000000       1.000000
    5  X5          -1.11022E-16              0    Lower BC
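A quick check of the reported optimum (our own snippet, using the estimates from the output): the
solution satisfies both flow-conservation equalities and yields a maximum flow of 20 vehicles per
minute.

xopt = {10 10 10 20 0};               /* X1..X5 from the output       */
flow = xopt[4] + xopt[5];             /* objective: maximum flow      */
r1   = xopt[2] - xopt[3] - xopt[5];   /* x13 - x32 - x34 = 0          */
r2   = xopt[1] + xopt[3] - xopt[4];   /* x12 + x32 - x24 = 0          */
print flow r1 r2;                     /* expect 20, 0, 0              */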

Finding the maximum flow through a network is equivalent to solving a simple linear optimization
problem, and for large problems, the LP procedure or the NETFLOW procedure of the SAS/OR product
can be used. On the other hand, finding a traffic pattern that minimizes the total delay to move
F vehicles per minute from node 1 to node 4 includes nonlinearities that need nonlinear
optimization techniques. As traffic volume increases, speed decreases. Let $t_{ij}$ be the travel
time on arc $(i,j)$ and assume that the following formulas describe the travel time as increasing
functions of the amount of traffic:

   $$t_{12} = 5 + 0.1\, x_{12} / (1 - x_{12}/10)$$
   $$t_{13} = x_{13} / (1 - x_{13}/30)$$
   $$t_{32} = 1 + x_{32} / (1 - x_{32}/10)$$
   $$t_{24} = x_{24} / (1 - x_{24}/30)$$
   $$t_{34} = 5 + 0.1\, x_{34} / (1 - x_{34}/10)$$

These formulas use the road capacities (upper bounds), and you can assume that F = 5 vehicles per
minute have to be moved through the network. The objective is now to minimize

   $$f = f(x) = t_{12} x_{12} + t_{13} x_{13} + t_{32} x_{32} + t_{24} x_{24} + t_{34} x_{34}$$

The constraints are

   $$0 \le x_{12}, x_{32}, x_{34} \le 10$$
   $$0 \le x_{13}, x_{24} \le 30$$

   $$x_{13} = x_{32} + x_{34}$$
   $$x_{24} = x_{12} + x_{32}$$
   $$x_{24} + x_{34} = F = 5$$

In the following code, the NLPNRR subroutine is used to solve the minimization problem:

proc iml;
title 'Minimize Total Delay in Network';
start MINDEL(x);
t12 = 5. + .1 * x[1] / (1. - x[1] / 10.);
t13 = x[2] / (1. - x[2] / 30.);
t32 = 1. + x[3] / (1. - x[3] / 10.);
t24 = x[4] / (1. - x[4] / 30.);
t34 = 5. + .1 * x[5] / (1. - x[5] / 10.);
f = t12*x[1] + t13*x[2] + t32*x[3] + t24*x[4] + t34*x[5];
return(f);
finish MINDEL;

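/* Rows 1-2 of CON are the bounds; rows 3-4 are the flow-conservation
   equalities, and row 5 enforces x24 + x34 = F = 5. In each constraint
   row, column 6 (the value 0) flags an equality and column 7 is the
   right-hand side. */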
con = { 0. 0. 0. 0. 0. . . ,
10. 30. 10. 30. 10. . . ,
0. 1. -1. 0. -1. 0. 0. ,
1. 0. 1. -1. 0. 0. 0. ,
0. 0. 0. 1. 1. 0. 5. };

x = j(1,5, 1.);
optn = {0 3};
call nlpnrr(xres,rc,"MINDEL",x,optn,con);

The optimal solution is shown in the following output.

                     Optimization Results
                     Parameter Estimates
                                      Gradient      Active
                                     Objective       Bound
    N  Parameter       Estimate       Function  Constraint

    1  X1              2.500001       5.777778
    2  X2              2.499999       5.702478
    3  X3          -5.55112E-17       1.000000    Lower BC
    4  X4              2.500001       5.702481
    5  X5              2.499999       5.777778

The active constraints and corresponding Lagrange multiplier estimates (costs) are shown in the following
output.

             Linear Constraints Evaluated at Solution

    1  ACT  -4.441E-16  =   0       +  1.0000 * X2  -  1.0000 * X3
                                    -  1.0000 * X5
    2  ACT  4.4409E-16  =   0       +  1.0000 * X1  +  1.0000 * X3
                                    -  1.0000 * X4
    3  ACT  4.4409E-16  =  -5.0000  +  1.0000 * X4