{VERSION 3 0 "APPLE_PPC_MAC" "3.0" }
{USTYLETAB {CSTYLE "Maple Input" -1 0 "Courier" 0 1 255 0 0 1 0 1 0 0
1 0 0 0 0 }{CSTYLE "2D Math" -1 2 "Times" 0 1 0 0 0 0 0 0 2 0 0 0 0 0
0 }{CSTYLE "2D Comment" 2 18 "" 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 }
{CSTYLE "" -1 256 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 257
"" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 258 "" 0 1 0 0 0 0 0
1 0 0 0 0 0 0 0 }{CSTYLE "" -1 259 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 }
{CSTYLE "" -1 260 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 }{CSTYLE "" -1 261
"" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 }{CSTYLE "" -1 262 "" 0 1 0 0 0 0 0
1 0 0 0 0 0 0 0 }{PSTYLE "Normal" -1 0 1 {CSTYLE "" -1 -1 "" 0 1 0 0
0 0 0 0 0 0 0 0 0 0 0 }0 0 0 -1 -1 -1 0 0 0 0 0 0 -1 0 }{PSTYLE "" 0
256 1 {CSTYLE "" -1 -1 "" 1 14 0 0 0 0 0 1 0 0 0 0 0 0 0 }0 0 0 -1 -1
-1 0 0 0 0 0 0 -1 0 }}
{SECT 0 {PARA 256 "" 0 "" {TEXT -1 54 "Linear Algebra, Infinite Dimens
ional Spaces, and Maple" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 ""
0 "" {TEXT -1 36 "Jim Herod, Georgia Tech, Atlanta, GA" }}{PARA 0 ""
0 "" {TEXT -1 21 "herod@math.gatech.edu" }}{PARA 0 "" 0 "" {TEXT -1 0
"" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT 256 11 "Sec
tion 3: " }{TEXT -1 53 " Self-Adjoint Transformations in Inner-Product
Spaces" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 9
"Problem 1" }}{PARA 0 "" 0 "" {TEXT -1 271 " In Section 3, we saw \+
that if a linear transformation on an inner-product space is self-adjo
int, then the eigenvalues are real and eigenvectors corresponding to d
ifferent eigenvalues are orthogonal. Here is a way to compute eigenvec
tors for the matrices in the notes." }}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 13 "with(linalg):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT
1 0 56 "A:=matrix([[-5,1,3],[1,-2,-1],[-4,1,2]]);\neigenvects(A);" }}}
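{PARA 0 "" 0 "" {TEXT -1 153 "     As a quick check, multiplying A by \+
the vector [2, -1, 3] should return -1 times that vector; the \+
multiply command from the linalg package does this." }}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 30 "multiply(A, vector([2,-1,3]));" }}}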
{PARA 0 "" 0 "" {TEXT -1 254 "     From the eigenvects calculation, we \+
see that [2, -1, 3] is an eigenvector corresponding to the eigenvalue \+
-1. Also, -2 is an eigenvalue of multiplicity two, but there is only \+
one independent eigenvector corresponding to this eigenvalue -- \+
[1, 0, 1]." }}{PARA 0 "" 0 "
" {TEXT -1 174 "     With a little thought, one can see that the space \+
spanned by the columns of the matrix A is the same as the range of A. \+
Maple provides two commands related to this idea." }}{EXCHG {PARA
0 "> " 0 "" {MPLTEXT 1 0 43 "colspace(A); colspan(A); colspan(A,'d'): \+
d;" }}}{PARA 0 "" 0 "" {TEXT -1 258 "     The command colspace compute
s a basis for the column space. On the other hand, colspan computes a \+
spanning set for the column space. The result will be \"fraction free.
\" By adding a second argument, one can compute the dimension from the \+
colspan command." }}{PARA 0 "" 0 "" {TEXT -1 99 "     Because the rang
e of this 3 by 3 matrix A is three-dimensional, we expect A to have an
inverse." }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 11 "inverse(A);" }}
}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{PARA 0 "" 0 "" {TEXT
-1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 9 "
Problem 2" }}{PARA 0 "" 0 "" {TEXT -1 65 " In Section 3, we also c
onsidered a linear transformation on " }{XPPEDIT 18 0 "L^2" "6#*$%\"LG
\"\"#" }{TEXT -1 52 "[0, 1] defined as follows: for each f in the spac
e, " }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 21 " \+
" }{TEXT 257 1 "K" }{TEXT -1 9 "[f](x) = " }
{XPPEDIT 18 0 "int(cos(Pi*(x-y))*f(y),y=0..1)" "6#-%$intG6$*&-%$cosG6#
*&%#PiG\"\"\",&%\"xGF,%\"yG!\"\"F,F,-%\"fG6#F/F,/F/;\"\"!\"\"\"" }
{TEXT -1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 ""
{TEXT -1 53 "We illustrate our methods with assistance from Maple." }}
{PARA 0 "" 0 "" {TEXT -1 47 " First, note that K[f] can be re-writ
ten as" }}{PARA 0 "" 0 "" {TEXT -1 21 " " }{TEXT
258 1 "K" }{TEXT -1 18 "[f](x) = cos(¹ x) " }{XPPEDIT 18 0 "int(cos(Pi
*y)*f(y),y=0..1)" "6#-%$intG6$*&-%$cosG6#*&%#PiG\"\"\"%\"yGF,F,-%\"fG6
#F-F,/F-;\"\"!\"\"\"" }{TEXT -1 12 " + sin(¹ x) " }{XPPEDIT 18 0 "int(
sin(Pi*y)*f(y),y=0..1)" "6#-%$intG6$*&-%$sinG6#*&%#PiG\"\"\"%\"yGF,F,-
%\"fG6#F-F,/F-;\"\"!\"\"\"" }{TEXT -1 1 "." }}{PARA 0 "" 0 "" {TEXT
-1 30 "Thus, for each f, there is an " }{TEXT 260 1 "a" }{TEXT -1 7 " \+
and a " }{TEXT 261 1 "b" }{TEXT -1 10 " such that" }}{PARA 0 "" 0 ""
{TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 21 " " }
{TEXT 259 1 "K" }{TEXT -1 34 "[f](x) = a cos(¹ x) + b sin(¹ x)." }}
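{PARA 0 "" 0 "" {TEXT -1 108 "     The re-writing above rests on the \+
addition formula for cosine, which Maple's expand command can supply:
" }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 22 "expand(cos(Pi*(x-y)));" }}}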
{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 39 "Hence, an
y eigenfunction g must satisfy" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}
{PARA 0 "" 0 "" {TEXT -1 12 " " }{XPPEDIT 18 0 "lambda" "6#
%'lambdaG" }{TEXT -1 42 "*g(x) = K[g](x) = a cos(¹ x) + b sin(¹ x)" }
}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 9 "for some \+
" }{XPPEDIT 18 0 "lambda" "6#%'lambdaG" }{TEXT -1 146 " and for some a \+
and b. Thus, g(x) must be in this form. We find the coefficients of \+
cos(¹ x) and sin(¹ x) by requiring that g be an eigenfunction." }}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 30 "g:=
x->a*cos(Pi*x)+b*sin(Pi*x);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0
37 "K:=f->int(cos(Pi*(x-y))*f(y),y=0..1);" }}}{EXCHG {PARA 0 "> " 0 "
" {MPLTEXT 1 0 27 "lambda*g(x)=simplify(K(g));" }}}{PARA 0 "" 0 ""
{TEXT -1 18 "We seek solutions " }{XPPEDIT 18 0 "lambda" "6#%'lambdaG
" }{TEXT -1 13 " a = a/2 and " }{XPPEDIT 18 0 "lambda" "6#%'lambdaG" }
{TEXT -1 52 " b = b/2. Re-writing these equations in matrix form," }}
{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 13 " \+
" }{XPPEDIT 18 0 "lambda*MATRIX([[1,0],[0,1]])" "6#*&%'lambdaG\"\"
\"-%'MATRIXG6#7$7$\"\"\"\"\"!7$F,\"\"\"F%" }{TEXT -1 1 " " }{XPPEDIT
18 0 "MATRIX([[a],[b]])" "6#-%'MATRIXG6#7$7#%\"aG7#%\"bG" }{TEXT -1 3
" = " }{XPPEDIT 18 0 "MATRIX([[1/2,0],[0,1/2]])" "6#-%'MATRIXG6#7$7$*&
\"\"\"\"\"\"\"\"#!\"\"\"\"!7$F-*&\"\"\"F*\"\"#F," }{TEXT -1 1 " " }
{XPPEDIT 18 0 "MATRIX([[a],[b]])" "6#-%'MATRIXG6#7$7#%\"aG7#%\"bG" }
{TEXT -1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 74 "That is, we want eigenva
lues and eigenvectors for the matrix on the right." }}{EXCHG {PARA 0 "
> " 0 "" {MPLTEXT 1 0 38 "eigenvects(matrix([[1/2,0],[0,1/2]]));" }}}
{PARA 0 "" 0 "" {TEXT -1 189 "We did not need a computer to get the an
swer! The implication for the original problem is that cos(¹ x) and \+
sin(¹ x) are eigenfunctions corresponding to the eigenvalue 1/2. Here \+
is a check." }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 34 "e1:=x->cos(Pi
*x);\nsimplify(K(e1));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 34 "e
2:=x->sin(Pi*x);\nsimplify(K(e2));" }}}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 0 "" }}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 ""
{TEXT 262 11 "Section 4: " }{TEXT -1 31 " The Gerschgorin Circle Theor
em" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "with(linalg):" }}}{PARA 0 "
" 0 "" {TEXT -1 177 " In this section, we give a geometric method \+
for finding bounds on the location of eigenvalues for matrices. As an \+
example, we illustrate with the matrix A given as follows:" }}{EXCHG
{PARA 0 "> " 0 "" {MPLTEXT 1 0 37 "A:=matrix([[2,0,1],[0,1,0],[1,0,2]]
);" }}}{PARA 0 "" 0 "" {TEXT -1 137 "The Gerschgorin Circle Theorem as
serts that all the eigenvalues in the complex plane for A are within \+
1 of 2 or are 1. We find them here." }{TEXT -1 0 "" }}{EXCHG {PARA 0 ">
" 0 "" {MPLTEXT 1 0 13 "eigenvals(A);" }}}{PARA 0 "" 0 "" {TEXT -1
205 "     This section defines a matrix norm. The ideas will recur \+
when we consider the norm for a linear transformation. The 2-norm is c
alled the Euclidean norm and is the norm most frequently associated wi
th " }{XPPEDIT 18 0 "R^n" "6#)%\"RG%\"nG" }{TEXT -1 103 ". The vector \+
norm defined in this section and the associated matrix norm can be acc
essed through Maple." }}{PARA 0 "" 0 "" {TEXT -1 29 " Here is th
e 2-norm in " }{XPPEDIT 18 0 "R^3" "6#*$%\"RG\"\"$" }{TEXT -1 1 "." }}
{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "with(linalg):" }}}{EXCHG
{PARA 0 "> " 0 "" {MPLTEXT 1 0 16 "norm([x,y,z],2);" }}}{EXCHG {PARA
0 "> " 0 "" {MPLTEXT 1 0 42 "norm(matrix([[1,2,3],[1,2,3],[1,5,6]]),2)
;" }}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 163 " \+
    There are other norms: the 1-norm, the 3-norm, etc. There is even \+
the infinity norm. These provide an interesting contrast and are also \+
accessible with Maple." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 ""
0 "" {TEXT -1 0 "" }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 16 "norm([x
,y,z],1);" }}}{PARA 0 "" 0 "" {TEXT -1 45 "Note that norm([x,y,z],1) \+
is |x| + |y| + |z|." }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 42 "norm(matrix([[
1,2,2],[1,2,3],[1,5,6]]),1);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1
0 23 "norm([x,y,z],infinity);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1
0 49 "norm(matrix([[1,2,2],[1,2,3],[1,5,6]]),infinity);" }}}{EXCHG
{PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{PARA 0 "" 0 "" {TEXT -1 0 "" }
}}{MARK "0 0" 54 }{VIEWOPTS 1 1 0 1 1 1803 }