Exterior calculus with R

The stokes package

To cite the stokes package in publications please use (Hankin 2022c). Ordinary differential calculus may be formalized and generalized to arbitrary-dimensional oriented manifolds using the exterior calculus. Here I show how the stokes package furnishes functionality for working with the exterior calculus, and provide numerical verification of a number of theorems. Notation follows that of Spivak (1965), and Hubbard and Hubbard (2015).

Recall that a k-tensor is a multilinear map S: Vk → ℝ, where V = ℝn is considered as a vector space; Spivak denotes the space of multilinear maps as 𝒥k(V). Formally, multilinearity means

$$ S\left(v_1,\ldots,av_i,\ldots,v_k\right)=a\cdot S\left(v_1,\ldots,v_i,\ldots,v_k\right) $$

and

$$ S\left(v_1,\ldots,v_i+v_i',\ldots,v_k\right)=S\left(v_1,\ldots,v_i,\ldots,v_k\right)+S\left(v_1,\ldots,v_i',\ldots,v_k\right), $$

where vi ∈ V. If S ∈ 𝒥k(V) and T ∈ 𝒥l(V), then we may define S ⊗ T ∈ 𝒥k + l(V) as

$$ (S\otimes T)\left(v_1,\ldots,v_k,v_{k+1},\ldots,v_{k+l}\right)=S\left(v_1,\ldots,v_k\right)\cdot T\left(v_{k+1},\ldots,v_{k+l}\right). $$

Spivak observes that 𝒥k(V) is spanned by the n^k products of the form

$$ \phi_{i_1}\otimes\phi_{i_2}\otimes\cdots\otimes\phi_{i_k}\qquad 1\leq i_1,i_2,\ldots,i_k\leq n $$

where v1, …, vn is a basis for V and ϕi(vj) = δij; we can therefore write

$$ S=\sum_{1\leq i_1,\ldots,i_k\leq n}a_{i_1\ldots i_k}\,\phi_{i_1}\otimes\cdots\otimes\phi_{i_k}. $$

The space spanned by such products has a natural representation in R as an array of dimensions n × ⋯ × n = n^k. If A is such an array, then the element A[i_1,i_2,...,i_k] is the coefficient of ϕi1 ⊗ … ⊗ ϕik. However, it is more efficient and conceptually cleaner to consider a sparse array, as implemented by the spray package. We will consider the case n = 5, k = 4, so we have multilinear maps from (ℝ5)4 to ℝ. Below, we will test algebraic identities in R using the idiom furnished by the stokes package. For our example we will define S = 1.5ϕ5 ⊗ ϕ1 ⊗ ϕ1 ⊗ ϕ1 + 2.5ϕ1 ⊗ ϕ1 ⊗ ϕ2 ⊗ ϕ3 + 3.5ϕ1 ⊗ ϕ3 ⊗ ϕ4 ⊗ ϕ2 using a matrix with three rows, one per term, whose rows give the indices of the ϕ’s in each term’s tensor product. We first have to load the stokes package:

library("stokes")

Then the idiom is straightforward:

k <- 4
n <- 5
M <- matrix(c(5,1,1,1, 1,1,2,3, 1,3,4,2),3,4,byrow=TRUE)
M
##      [,1] [,2] [,3] [,4]
## [1,]    5    1    1    1
## [2,]    1    1    2    3
## [3,]    1    3    4    2
S <- as.ktensor(M,coeffs= 0.5 + 1:3)
S
## A linear map from V^4 to R with V=R^5:
##              val
##  1 3 4 2  =  3.5
##  1 1 2 3  =  2.5
##  5 1 1 1  =  1.5

Observe that, if stored as an array of size n^k, S would have 5^4 = 625 elements, all but three of which are zero. So S is a 4-tensor, mapping V4 to ℝ, where V = ℝ5. Here we have S = 1.5ϕ5 ⊗ ϕ1 ⊗ ϕ1 ⊗ ϕ1 + 2.5ϕ1 ⊗ ϕ1 ⊗ ϕ2 ⊗ ϕ3 + 3.5ϕ1 ⊗ ϕ3 ⊗ ϕ4 ⊗ ϕ2. Note that in some implementations the row order of object S will differ from that of M; this phenomenon is due to the underlying C++ implementation using the STL map class; see the disordR package (Hankin 2022a) for details, and the mvp package (Hankin 2022b) for a fuller discussion.

Package idiom for evaluation of a tensor

First, we will define E to be a random point in Vk in terms of a matrix:

set.seed(0)
(E <- matrix(rnorm(n*k),n,k))   # A random point in V^k
##            [,1]         [,2]       [,3]       [,4]
## [1,]  1.2629543 -1.539950042  0.7635935 -0.4115108
## [2,] -0.3262334 -0.928567035 -0.7990092  0.2522234
## [3,]  1.3297993 -0.294720447 -1.1476570 -0.8919211
## [4,]  1.2724293 -0.005767173 -0.2894616  0.4356833
## [5,]  0.4146414  2.404653389 -0.2992151 -1.2375384

Recall that n = 5, k = 4, so E ∈ (ℝ5)4. We can evaluate S at E as follows:

f <- as.function(S)
f(E)
## [1] -3.068997
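
As a quick sanity check (a sketch using base R only), we can evaluate S "by hand": each term contributes its coefficient multiplied by the product of the entries of E selected by the corresponding row of M.

coeffs <- 0.5 + 1:3    # the coefficients used to create S above
sum(sapply(seq_len(nrow(M)),
           function(r) coeffs[r] * prod(E[cbind(M[r,], seq_len(k))])))
# should match f(E) above, about -3.069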

Vector space structure of tensors

Tensors have a natural vector space structure; they may be added, subtracted, and multiplied by scalars, just like the elements of any other vector space. Below, we define a new tensor S1 and work with 2S − 3S1:

S1 <- as.ktensor(1+diag(4),1:4)
2*S-3*S1
## A linear map from V^4 to R with V=R^5:
##              val
##  1 3 4 2  =    7
##  1 1 2 3  =    5
##  5 1 1 1  =    3
##  1 1 1 2  =  -12
##  1 1 2 1  =   -9
##  1 2 1 1  =   -6
##  2 1 1 1  =   -3

We may verify that tensors are linear using package idiom:

LHS <- as.function(2*S-3*S1)(E)
RHS <- 2*as.function(S)(E) -3*as.function(S1)(E)
c(lhs=LHS,rhs=RHS,diff=LHS-RHS)
##           lhs           rhs          diff 
##  2.374816e+00  2.374816e+00 -4.440892e-16

(that is, identical up to numerical precision).

Numerical verification of multilinearity in the package

Testing multilinearity is straightforward in the package. To do this, we need to define three matrices E1,E2,E3 corresponding to points in (ℝ5)4 which are identical except for one column. In E3, this column is a linear combination of the corresponding columns of E1 and E2:

E1 <- E
E2 <- E
E3 <- E

x1 <- rnorm(n)
x2 <- rnorm(n)
r1 <- rnorm(1)
r2 <- rnorm(1)

E1[,2] <- x1
E2[,2] <- x2
E3[,2] <- r1*x1 + r2*x2

Then we can verify the multilinearity of S by coercing to a function which is applied to E1, E2, E3:

f <- as.function(S)
LHS <- r1*f(E1) + r2*f(E2)
RHS <- f(E3)
c(lhs=LHS,rhs=RHS,diff=LHS-RHS)
##        lhs        rhs       diff 
## -0.5640577 -0.5640577  0.0000000

(that is, identical up to numerical precision). Note that multilinearity is not equivalent to linearity over ℝ^{nk}:

E1 <- matrix(rnorm(n*k),n,k)
E2 <- matrix(rnorm(n*k),n,k)
LHS <- f(r1*E1+r2*E2)
RHS <- r1*f(E1)+r2*f(E2)
c(lhs=LHS,rhs=RHS,diff=LHS-RHS)
##        lhs        rhs       diff 
##  0.1731245  0.3074186 -0.1342941

Tensor product of general tensors

Given a k-tensor object S and an l-tensor object T we can form the tensor product S ⊗ T, defined as

$$ (S\otimes T)\left(v_1,\ldots,v_k,v_{k+1},\ldots,v_{k+l}\right)=S\left(v_1,\ldots,v_k\right)\cdot T\left(v_{k+1},\ldots,v_{k+l}\right) $$

We will calculate the tensor product of two tensors S1,S2 defined as follows:

(S1 <- ktensor(spray(cbind(1:3,2:4),1:3)))
## A linear map from V^2 to R with V=R^4:
##          val
##  3 4  =    3
##  2 3  =    2
##  1 2  =    1
(S2 <- as.ktensor(matrix(1:6,2,3)))
## A linear map from V^3 to R with V=R^6:
##            val
##  2 4 6  =    1
##  1 3 5  =    1

The R idiom for S1 ⊗ S2 would be tensorprod(), or %X%:

tensorprod(S1,S2)
## A linear map from V^5 to R with V=R^6:
##                val
##  1 2 1 3 5  =    1
##  3 4 1 3 5  =    3
##  1 2 2 4 6  =    1
##  2 3 2 4 6  =    2
##  2 3 1 3 5  =    2
##  3 4 2 4 6  =    3

Then, for example:

E <- matrix(rnorm(30),6,5)
LHS <- as.function(tensorprod(S1,S2))(E)
RHS <- as.function(S1)(E[,1:2]) * as.function(S2)(E[,3:5])
c(lhs=LHS,rhs=RHS,diff=LHS-RHS)
##       lhs       rhs      diff 
## -1.048329 -1.048329  0.000000

(that is, identical up to numerical precision).

Alternating forms

An alternating form is a multilinear map T satisfying

$$ T\left(v_1,\ldots,v_i,\ldots,v_j,\ldots,v_k\right)=-T\left(v_1,\ldots,v_j,\ldots,v_i,\ldots,v_k\right) $$

(or, equivalently, T(v1, …, vi, …, vi, …, vk) = 0). We write Λk(V) for the space of all alternating multilinear maps from Vk to ℝ. Spivak gives Alt : 𝒥k(V) → Λk(V) defined by

$$\operatorname{Alt}(T)\left(v_1,\ldots,v_k\right)= \frac{1}{k!}\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)\cdot T{\left(v_{\sigma(1)},\ldots,v_{\sigma(k)}\right)} $$

where the sum ranges over all permutations of {1, 2, …, k} and sgn(σ) ∈ {−1, +1} is the sign of the permutation. If T ∈ 𝒥k(V) and ω ∈ Λk(V), it is straightforward to prove that Alt(T) ∈ Λk(V), Alt(Alt(T)) = Alt(T), and Alt(ω) = ω.

In the stokes package, this is effected by the Alt() function:

S1
## A linear map from V^2 to R with V=R^4:
##          val
##  3 4  =    3
##  2 3  =    2
##  1 2  =    1
Alt(S1)
## A linear map from V^2 to R with V=R^4:
##           val
##  4 3  =  -1.5
##  3 2  =  -1.0
##  2 3  =   1.0
##  3 4  =   1.5
##  2 1  =  -0.5
##  1 2  =   0.5
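
The identity Alt(Alt(T)) = Alt(T) noted above is easy to check in package idiom; a minimal sketch (the difference should be the zero tensor):

Alt(Alt(S1)) - Alt(S1)   # idempotency of Alt(); expect the zero tensor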

Verifying that Alt(S1) is in fact alternating is straightforward:

E <- matrix(rnorm(8),4,2)
Erev <- E[,2:1]
as.function(Alt(S1))(E) + as.function(Alt(S1))(Erev)  # should be zero
## [1] 0

However, we can see that this form for alternating tensors (here called k-forms) is inefficient and highly redundant: in this example there is a 1 2 term and a 2 1 term (the coefficients are equal and opposite). In this example we have k = 2 but in general there would be potentially k! essentially repeated terms which collectively require only a single coefficient. The package provides kform objects which are inherently alternating using a more efficient representation; they are described using wedge products which are discussed next.

Wedge products and the exterior calculus

This section follows the exposition of Hubbard and Hubbard, who introduce the exterior calculus starting with a discussion of elementary forms, which are alternating forms with a particularly simple structure. An example of an elementary form would be dx1 ∧ dx3 [treated as an indivisible entity], which is an alternating multilinear map from ℝn × ℝn to ℝ with

$$ \left( \mathrm{d}x_1\wedge\mathrm{d}x_3 \right)\left( \begin{pmatrix}a_1\\a_2\\a_3\\ \vdots\\ a_n\end{pmatrix}, \begin{pmatrix}b_1\\b_2\\b_3\\ \vdots\\ b_n\end{pmatrix} \right)=\mathrm{det} \begin{pmatrix} a_1 & b_1 \\ a_3 & b_3\end{pmatrix} =a_1b_3-a_3b_1 $$

That this is alternating follows from the properties of the determinant. In general of course, $\mathrm{d}x_i\wedge\mathrm{d}x_j\left( \begin{pmatrix}a_1\\ \vdots\\ a_n\end{pmatrix}, \begin{pmatrix}b_1\\ \vdots\\ b_n\end{pmatrix} \right)=\mathrm{det} \begin{pmatrix} a_i & b_i \\ a_j & b_j\end{pmatrix}$. Because such objects are linear, it is possible to consider sums of elementary forms, such as dx1 ∧ dx2 + 3dx2 ∧ dx3 with

$$ \left( \mathrm{d}x_1\wedge\mathrm{d}x_2 + 3\mathrm{d}x_2\wedge\mathrm{d}x_3 \right)\left( \begin{pmatrix}a_1\\a_2\\ \vdots\\ a_n\end{pmatrix}, \begin{pmatrix}b_1\\b_2\\ \vdots\\ b_n\end{pmatrix} \right)=\mathrm{det} \begin{pmatrix} a_1 & b_1\\ a_2 & b_2\end{pmatrix} +3\mathrm{det} \begin{pmatrix} a_2 & b_2\\ a_3 & b_3\end{pmatrix} $$

or even K = dx4 ∧ dx2 ∧ dx3 + 5dx1 ∧ dx2 ∧ dx4 which would be a linear map from (ℝn)3 to ℝ with

$$ \left( \mathrm{d}x_4\wedge\mathrm{d}x_2\wedge\mathrm{d}x_3 +5\mathrm{d}x_1\wedge\mathrm{d}x_2\wedge\mathrm{d}x_4 \right)\left( \begin{pmatrix}a_1\\a_2\\ \vdots\\ a_n\end{pmatrix}, \begin{pmatrix}b_1\\b_2\\ \vdots\\ b_n\end{pmatrix}, \begin{pmatrix}c_1\\c_2\\ \vdots\\ c_n\end{pmatrix} \right)=\mathrm{det} \begin{pmatrix} a_4 & b_4 & c_4\\ a_2 & b_2 & c_2\\ a_3 & b_3 & c_3 \end{pmatrix} +5\mathrm{det} \begin{pmatrix} a_1 & b_1 & c_1\\ a_2 & b_2 & c_2\\ a_4 & b_4 & c_4 \end{pmatrix}. $$

Defining K has ready R idiom in which we define a matrix whose rows correspond to the differentials in each term:

M <- matrix(c(4,2,3,1,4,2),2,3,byrow=TRUE)
M
##      [,1] [,2] [,3]
## [1,]    4    2    3
## [2,]    1    4    2
K <- as.kform(M,c(1,5))
K
## An alternating linear map from V^3 to R with V=R^4:
##            val
##  1 2 4  =   -5
##  2 3 4  =    1
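
As a quick numerical sanity check (a sketch; EK is a new illustrative variable), we can evaluate K at a point of (ℝ4)3 and compare with the sum of 3 × 3 determinants implied by the rows and coefficients printed above:

EK <- matrix(rnorm(12), 4, 3)       # a point in (R^4)^3
LHS <- as.function(K)(EK)
RHS <- -5*det(EK[c(1,2,4), ]) + det(EK[c(2,3,4), ])
c(lhs=LHS, rhs=RHS, diff=LHS-RHS)   # difference should be zero to numerical precision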

Function as.kform() takes each row of M and places the elements in increasing order; the coefficient will change sign if the permutation is odd. Note that the order of the rows in K is immaterial and indeed in some implementations will appear in a different order: the stokes package uses the spray package, which in turn utilises the STL map class of C++.

Formal definition of dx

In the previous section we defined objects such as “dx1 ∧ dx6” as a single entity. Here I define the elementary form dxi formally and in the next section discuss the wedge product ∧. The elementary form dxi is simply a map from ℝn to ℝ with dxi(x1, x2, …, xn) = xi. Observe that dxi is an alternating form, even though we cannot swap arguments (because there is only one). Package idiom for creating an elementary form appears somewhat cryptic at first sight, but is consistent (it is easier to understand package idiom for creating more complicated alternating forms, as in the next section). Suppose we wish to work with dx3:

dx3 <- as.kform(matrix(3,1,1),1)
options(kform_symbolic_print = NULL) # revert to default print method
dx3
## An alternating linear map from V^1 to R with V=R^3:
##        val
##  3  =    1

Interpretation of the output above is not obvious (it is easier to understand the output from more complicated alternating forms, as in the next section), but for the moment observe that dx3 is indeed an alternating form, mapping ℝn to ℝ with dx3(x1, x2, …, xn) = x3. Thus, for example:

as.function(dx3)(c(14,15,16))
## [1] 16
as.function(dx3)(c(14,15,16,17,18))  # idiom can deal with arbitrary vectors
## [1] 16

and we see that dx3 picks out the third element of a vector. Elementary forms may be added, subtracted, and multiplied by scalars:

dx5 <- as.kform(matrix(5,1,1),1)
as.function(dx3 + 2*dx5)(1:10)  # picks out element 3 + 2*element 5
## [1] 13

Formal definition of wedge product

The wedge product maps two alternating forms to another alternating form; given ω ∈ Λk(V) and η ∈ Λl(V), Spivak defines the wedge product ω ∧ η ∈ Λk + l(V) as

$$ \omega\wedge\eta={k+l\choose k\quad l}\operatorname{Alt}(\omega\otimes\eta) $$

and this is implemented in the package by function wedge(), or, more idiomatically, ^:

M1 <- matrix(c(3,5,4, 4,6,1),2,3,byrow=TRUE)
K1 <- as.kform(M1,c(2,7))
K1
## An alternating linear map from V^3 to R with V=R^6:
##            val
##  1 4 6  =    7
##  3 4 5  =   -2
M2 <- cbind(1:5,3:7)
K2 <- as.kform(M2,1:5)
K2
## An alternating linear map from V^2 to R with V=R^7:
##          val
##  5 7  =    5
##  4 6  =    4
##  3 5  =    3
##  2 4  =    2
##  1 3  =    1

In symbolic notation, K1 is equal to 7dx1 ∧ dx4 ∧ dx6 − 2dx3 ∧ dx4 ∧ dx5, and K2 is dx1 ∧ dx3 + 2dx2 ∧ dx4 + 3dx3 ∧ dx5 + 4dx4 ∧ dx6 + 5dx5 ∧ dx7. Package idiom for wedge products is straightforward:

K1 ^ K2
## An alternating linear map from V^5 to R with V=R^7:
##                val
##  1 3 4 5 6  =  -21
##  1 4 5 6 7  =  -35

(we might write the product as −35dx1 ∧ dx4 ∧ dx5 ∧ dx6 ∧ dx7 − 21dx1 ∧ dx3 ∧ dx4 ∧ dx5 ∧ dx6). See how the wedge product eliminates rows with repeated entries, gathers permuted rows together (respecting the sign of the permutation), and expresses the result in terms of elementary forms. The product is a linear combination of two elementary forms; note that only two coefficients out of a possible ${7\choose 5}=21$ are nonzero. Note again that the order of the rows in the product is arbitrary.

The wedge product has formal properties such as distributivity but by far the most interesting one is associativity, which I will demonstrate below:

F1 <- as.kform(matrix(c(3,4,5, 4,6,1,3,2,1),3,3,byrow=TRUE))
F2 <- as.kform(cbind(1:6,3:8),1:6)
F3 <- kform_general(1:8,2)
(F1 ^ F2) ^ F3
## An alternating linear map from V^7 to R with V=R^8:
##                    val
##  1 2 4 5 6 7 8  =   -5
##  2 3 4 5 6 7 8  =    6
##  1 2 3 5 6 7 8  =   11
##  1 2 3 4 5 6 8  =    1
##  1 2 3 4 5 7 8  =   -5
##  1 2 3 4 6 7 8  =    2
##  1 2 3 4 5 6 7  =    1
##  1 3 4 5 6 7 8  =   -2
F1 ^ (F2 ^ F3)
## An alternating linear map from V^7 to R with V=R^8:
##                    val
##  2 3 4 5 6 7 8  =    6
##  1 2 4 5 6 7 8  =   -5
##  1 2 3 4 5 7 8  =   -5
##  1 2 3 4 5 6 8  =    1
##  1 2 3 4 6 7 8  =    2
##  1 2 3 4 5 6 7  =    1
##  1 3 4 5 6 7 8  =   -2
##  1 2 3 5 6 7 8  =   11

Note carefully in the above that the terms in (F1 ^ F2) ^ F3 and F1 ^ (F2 ^ F3) appear in a different order. They are nevertheless algebraically identical, as we may demonstrate by calculating their difference:

(F1 ^ F2) ^ F3 - F1 ^ (F2 ^ F3)
## The zero alternating linear map from V^7 to R with V=R^n:
## empty sparse array with 7 columns
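
Another formal property is graded commutativity: for a k-form ω and an l-form η we have ω ∧ η = (−1)^{kl} η ∧ ω. A minimal sketch of a check, reusing the forms defined above (both results should be the zero form):

K1 ^ K2 - K2 ^ K1        # k=3, l=2: kl is even, so the two products agree
dx3 ^ dx5 + dx5 ^ dx3    # two one-forms anticommute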

Spivak observes that Λk(V) is spanned by the $n\choose k$ wedge products of the form

$$ \mathrm{d}x_{i_1}\wedge\mathrm{d}x_{i_2}\wedge\cdots\wedge\mathrm{d}x_{i_k}\qquad 1\leq i_1<i_2<\cdots<i_k\leq n $$

where these products are the elementary forms (compare 𝒥k(V), which is spanned by the n^k products ϕi1 ⊗ ⋯ ⊗ ϕik). Thus every element of the space Λk(V) is a linear combination of elementary forms, as illustrated in the package by function kform_general(). Consider the following idiom:

Krel <- kform_general(4,2,1:6)
Krel
## An alternating linear map from V^2 to R with V=R^4:
##          val
##  3 4  =    6
##  2 4  =    5
##  1 4  =    4
##  2 3  =    3
##  1 3  =    2
##  1 2  =    1

Object Krel is a two-form, specifically a map from (ℝ4)2 to ℝ. Observe that Krel has ${4\choose 2}=6$ components, which do not appear in any particular order. Addition of such k-forms is straightforward in R idiom but algebraically nontrivial:

K1 <- as.kform(matrix(1:4,2,2),c(1,109))
K2 <- as.kform(matrix(c(1,3,7,8,2,4),ncol=2,byrow=TRUE),c(-1,5,4))
K1
## An alternating linear map from V^2 to R with V=R^4:
##          val
##  2 4  =  109
##  1 3  =    1
K2
## An alternating linear map from V^2 to R with V=R^8:
##          val
##  2 4  =    4
##  7 8  =    5
##  1 3  =   -1
K1+K2
## An alternating linear map from V^2 to R with V=R^8:
##          val
##  2 4  =  113
##  7 8  =    5

In the above, note how the dx2 ∧ dx4 terms combine [to give 2 4 = 113] and the dx1 ∧ dx3 term vanishes by cancellation.

Contractions

Given a k-form ϕ: Vk → ℝ and a vector v ∈ V, the contraction ϕv of ϕ and v is a (k − 1)-form with

$$ \phi_v\left(v_1,\ldots,v_{k-1}\right)=\phi\left(v,v_1,\ldots,v_{k-1}\right) $$

if k > 1; we specify ϕv = ϕ(v) if k = 1. Verification is straightforward:

(o <- rform())  # a random 3-form
## An alternating linear map from V^3 to R with V=R^7:
##            val
##  2 4 6  =   -6
##  4 6 7  =    5
##  5 6 7  =    4
##  1 3 7  =    7
##  1 5 7  =  -12
##  1 4 6  =    8
##  2 3 7  =   -2
##  1 2 4  =    1
V <- matrix(runif(21),ncol=3)
LHS <- as.function(o)(V)
RHS <- as.function(contract(o,V[,1]))(V[,-1])
c(LHS=LHS,RHS=RHS,diff=LHS-RHS)
##          LHS          RHS         diff 
## 4.512547e-01 4.512547e-01 1.110223e-16

It is possible to iterate the contraction process; if we pass a matrix V to contract() then this is interpreted as repeated contraction with the columns of V:

as.function(contract(o,V[,1:2]))(V[,-(1:2),drop=FALSE])
## [1] 0.4512547
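
Contracting twice with the same vector gives zero, because o is alternating; a quick sketch:

contract(o, V[, c(1,1)])   # repeat the first column; expect zero (up to rounding)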

If we pass three columns to contract() the result is a 0-form:

contract(o,V)
## [1] 0.4512547

In the above, the result is coerced to a scalar; in order to work with a formal 0-form (which is represented in the package as a spray with a zero-column index matrix) we can use the lose=FALSE argument:

contract(o,V,lose=FALSE)
## An alternating linear map from V^0 to R with V=R^0:
##            val
##   =  0.4512547

Transformations and pullback

Suppose we are given a two-form $\omega=\sum_{i<j}a_{ij}\mathrm{d}x_i\wedge\mathrm{d}x_j$ and relationships $\mathrm{d}x_i=\sum_r M_{ir}\mathrm{d}y_r$; then we would have

$$ \omega=\sum_{i<j}a_{ij}\left(\sum_r M_{ir}\mathrm{d}y_r\right)\wedge\left(\sum_r M_{jr}\mathrm{d}y_r\right). $$

The general situation would be a k-form where we would have

$$ \omega=\sum_{i_1<\cdots<i_k}a_{i_1\ldots i_k}\,\mathrm{d}x_{i_1}\wedge\cdots\wedge\mathrm{d}x_{i_k} $$

giving

$$ \omega=\sum_{i_1<\cdots<i_k}\left[a_{i_1\ldots i_k}\left(\sum_r M_{i_1r}\mathrm{d}y_r\right)\wedge\cdots\wedge\left(\sum_r M_{i_kr}\mathrm{d}y_r\right)\right]. $$

Thus ω, which was given in terms of dx1, …, dxn, has been expressed in terms of dy1, …, dyn. For example, if

ω = dx1 ∧ dx2 + 5dx1 ∧ dx3

and

$$ \left( \begin{array}{l} dx_1\\ dx_2\\ dx_3 \end{array} \right)= \left( \begin{array}{ccc} 1 & 4 & 7\\ 2 & 5 & 8\\ 3 & 6 & 9\\ \end{array} \right) \left( \begin{array}{l} dy_1\\ dy_2\\ dy_3 \end{array} \right) $$

then

$$ \begin{array}{ccl} \omega &=& \left(1dy_1+4dy_2+7dy_3\right)\wedge \left(2dy_1+5dy_2+8dy_3\right)+ 5\left(1dy_1+4dy_2+7dy_3\right)\wedge \left(3dy_1+6dy_2+9dy_3\right) \\ &=&2dy_1\wedge dy_1+5dy_1\wedge dy_2+\cdots+ 5\cdot 7\cdot 6dy_3\wedge dy_2+ 5\cdot 7\cdot 9dy_3\wedge dy_3\\ &=& -33dy_1\wedge dy_2-66dy_1\wedge dy_3-33dy_2\wedge dy_3 \end{array} $$

Function pullback() does all this:

options(kform_symbolic_print = "dx")   # uses dx etc in print method
pullback(dx^dy+5*dx^dz, matrix(1:9,3,3))
## An alternating linear map from V^2 to R with V=R^3:
##  -33 dx^dy -66 dx^dz -33 dy^dz
options(kform_symbolic_print = NULL) # revert to default
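
As a quick check (a sketch), we can build the hand-calculated answer directly with as.kform() and compare; the difference should be the zero form:

byhand <- as.kform(rbind(c(1,2), c(1,3), c(2,3)), c(-33,-66,-33))
pullback(dx^dy + 5*dx^dz, matrix(1:9,3,3)) - byhand   # expect the zero form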

However, it is slow and I am not 100% sure that there isn’t a much more efficient way to do such a transformation. There are a few tests in tests/testthat. Here I show that transformations may be inverted using matrix inverses:

(o <- 2 * as.kform(2) ^ as.kform(4) ^ as.kform(5))
## An alternating linear map from V^3 to R with V=R^5:
##            val
##  2 4 5  =    2
M <- matrix(rnorm(25),5,5)

Then we will transform according to matrix M and then transform according to the matrix inverse; the functionality works nicely with pipes:

o |> pullback(M) |> pullback(solve(M))
## An alternating linear map from V^3 to R with V=R^5:
##                      val
##  3 4 5  =  -2.775558e-17
##  1 2 3  =   3.469447e-17
##  2 3 5  =  -1.170938e-16
##  2 3 4  =   3.191891e-16
##  2 4 5  =   2.000000e+00
##  1 2 5  =  -2.081668e-17
##  1 3 4  =   1.006140e-16
##  1 2 4  =  -1.179612e-16
##  1 3 5  =   1.110223e-16
##  1 4 5  =   1.179612e-16

Above we see many rows whose values are zero to within numerical precision but are not exactly zero, and so are not eliminated by the spray internals. We can remove the small entries with zap():

o |> pullback(M) |> pullback(solve(M)) |> zap()
##            val
##  2 4 5  =    2

See how the result is equal to the original k-form 2dy2 ∧ dy4 ∧ dy5.

Exterior derivatives

Given a k-form ω, Spivak defines the differential of ω to be a (k + 1)-form dω as follows. If

$$ \omega=\sum_{i_1<i_2<\cdots<i_k}\omega_{i_1i_2\ldots i_k}\,\mathrm{d}x_{i_1}\wedge\mathrm{d}x_{i_2}\wedge\cdots\wedge\mathrm{d}x_{i_k} $$

then

$$ \mathrm{d}\omega = \sum_{ i_1 < i_2 <\cdots<i_k} \sum_{\alpha=1}^n D_\alpha\left(\omega_{i_1i_2\ldots i_k}\right) \cdot \mathrm{d}x^{\alpha}\wedge \mathrm{d}x^{i_1}\wedge \mathrm{d}x^{i_2}\wedge\cdots\wedge\mathrm{d}x^{i_k} $$

where $D_if(a)=\lim_{h\longrightarrow 0}\frac{f(a^1,\ldots,a^i+h,\ldots,a^n)-f(a^1,\ldots,a^i,\ldots,a^n)}{h}$ is the ordinary ith partial derivative (Spivak, p25). Hubbard and Hubbard take a conceptually distinct approach and define the exterior derivative dϕ (they use a bold font, dϕ) of the k-form ϕ as the (k + 1)-form given by

$$ {d}\phi \left({v}_1,\ldots,{v}_{k+1}\right) = \lim_{h\longrightarrow 0}\frac{1}{h^{k+1}}\int_{\partial P_{x}\left(h{v}_1,\ldots,h{v}_{k+1}\right)}\phi $$

which, by their own account, is a rather opaque mathematical idiom. However, the definition makes sense and it is consistent with Spivak’s definition above. The definition allows one to express the fundamental theorem of calculus in an arbitrary number of dimensions without modification.

It can be shown that

$$ \mathrm{d}\left(f\,\mathrm{d}x_{i_1}\wedge\cdots\wedge\mathrm{d}x_{i_k}\right)=\mathrm{d}f\wedge\mathrm{d}x_{i_1}\wedge\cdots\wedge\mathrm{d}x_{i_k} $$

where f: ℝn → ℝ is a scalar function of position. The package provides grad() which, when given a vector x1, …, xn returns the one-form

$$ \sum_{i=1}^n x_idx_i $$

This is useful because $\mathrm{d}f=\sum_{j=1}^n\left(D_j f\right)\,\mathrm{d}x_j$. Thus

grad(c(0.4,0.1,-3.2,1.5))
## An alternating linear map from V^1 to R with V=R^4:
##         val
##  4  =   1.5
##  3  =  -3.2
##  2  =   0.1
##  1  =   0.4
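
As a quick check (a sketch using base R), the one-form grad(x) evaluated at a vector v should simply return the dot product sum(x*v):

x <- c(0.4, 0.1, -3.2, 1.5)
v <- rnorm(4)
c(lhs = as.function(grad(x))(v), rhs = sum(x*v))   # should agree to numerical precision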

We will use the grad() function to verify that, in ℝn, a certain (n − 1)-form has zero exterior derivative. Motivated by the fact that

$$ F_3=\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}} \begin{pmatrix}x\\y\\z\end{pmatrix} $$

is a divergenceless velocity field in ℝ3, H&H go on to define [page 548, equation 6.7.16]

$$ \omega_{n}=\frac{1}{\left(x_1^2+\ldots +x_n^2\right)^{n/2}}\sum_{i=1}^{n}(-1)^{i-1} x_i\,\mathrm{d}x_1\wedge\cdots\wedge\widehat{\mathrm{d}x_i}\wedge\cdots\wedge\mathrm{d}x_n $$

(where a hat indicates the absence of a term), and show analytically that dω = 0. Here I show this using R idiom. The first thing is to define a function that implements the hat:

f <- function(x){
    n <- length(x)
    as.kform(t(apply(diag(n)<1,2,which)))
}

So, for example:

f(1:5)
## An alternating linear map from V^4 to R with V=R^5:
##              val
##  1 2 3 4  =    1
##  1 2 3 5  =    1
##  1 2 4 5  =    1
##  1 3 4 5  =    1
##  2 3 4 5  =    1

Then we can use the grad() function to calculate dωn, using the quotient rule to express the derivatives analytically:

df  <- function(x){
    n <- length(x)
    S <- sum(x^2)
    grad(rep(c(1,-1),length=n)*(S^(n/2) - n*x^2*S^(n/2-1))/S^n
    )
}

Thus

df(1:5)
## An alternating linear map from V^1 to R with V=R^5:
##                  val
##  5  =  -5.673207e-05
##  4  =   2.026145e-05
##  3  =   8.104581e-06
##  2  =  -2.836603e-05
##  1  =   4.052291e-05

Now we can use the wedge product of the two parts to show that the exterior derivative is zero:

x <- rnorm(9)
print(df(x) ^ f(x))  # should be zero
## An alternating linear map from V^9 to R with V=R^9:
##                                 val
##  1 2 3 4 5 6 7 8 9  =  3.388132e-21
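
As with the pullback example above, we can remove the tiny residual coefficient with zap() (a quick sketch; the result should be the zero form):

zap(df(x) ^ f(x))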

Differential of the differential, d² = 0

We can use the package to verify the celebrated fact that, for any k-form ϕ, d(dϕ) = 0. The first step is to define scalar functions f1(), f2(), f3(), all 0-forms:

f1 <- function(w,x,y,z){x + y^3 + x*y*w*z}
f2 <- function(w,x,y,z){w^2*x*y*z + sin(w) + w+z}
f3 <- function(w,x,y,z){w*x*y*z + sin(x) + cos(w)}

Now we need to define elementary 1-forms:

dw <- as.kform(1)
dx <- as.kform(2)
dy <- as.kform(3)
dz <- as.kform(4)

I will demonstrate the theorem by defining a 2-form which is the sum of three elementary two-forms, evaluated at a particular point in ℝ4:

phi <-
  (
    +f1(1,2,3,4) ^ dw ^ dx
    +f2(1,2,3,4) ^ dw ^ dy
    +f3(1,2,3,4) ^ dy ^ dz
  )

We could use slightly slicker R idiom by defining elementary forms e1,e2,e3 and then defining phi to be a linear sum, weighted with 0-forms given by the (scalar) functions f1,f2,f3:

e1 <- dw ^ dx
e2 <- dw ^ dy
e3 <- dy ^ dz

phi <-
  (
    +f1(1,2,3,4) ^ e1
    +f2(1,2,3,4) ^ e2
    +f3(1,2,3,4) ^ e3
  )
phi
## An alternating linear map from V^2 to R with V=R^4:
##               val
##  1 3  =  29.84147
##  1 2  =  53.00000
##  3 4  =  25.44960

Now to evaluate first derivatives of f1() etc at point (1, 2, 3, 4), using Deriv() from the Deriv package:

library("Deriv")
Df1 <- Deriv(f1)(1,2,3,4)
Df2 <- Deriv(f2)(1,2,3,4)
Df3 <- Deriv(f3)(1,2,3,4)

So Df1 etc are numeric vectors of length 4, for example:

Df1
##  w  x  y  z 
## 24 13 35  6

To calculate dphi, or dϕ, we can use function grad():

dphi <-
  (
    +grad(Df1) ^ e1
    +grad(Df2) ^ e2
    +grad(Df3) ^ e3
  )
dphi
## An alternating linear map from V^3 to R with V=R^4:
##                 val
##  1 3 4  =  30.15853
##  1 2 3  =  23.00000
##  1 2 4  =   6.00000
##  2 3 4  =  11.58385

Now work on the differential of the differential. First evaluate the Hessians (4x4 numeric matrices) at the same point:

Hf1 <- matrix(Deriv(f1,nderiv=2)(1,2,3,4),4,4)
Hf2 <- matrix(Deriv(f2,nderiv=2)(1,2,3,4),4,4)
Hf3 <- matrix(Deriv(f3,nderiv=2)(1,2,3,4),4,4)

For example

Hf1
##    w  x  y z
## w  0 12  8 6
## x 12  0  4 3
## y  8  4 18 2
## z  6  3  2 0

(note the matrix is symmetric; also note carefully the nonzero diagonal term). Then d(dϕ) is zero, because each Hessian is symmetric while the wedge products dxi ∧ dxj are antisymmetric:

ij <- expand.grid(seq_len(nrow(Hf1)),seq_len(ncol(Hf1)))

ddphi <- # should be zero
  (
    +as.kform(ij,c(Hf1))
    +as.kform(ij,c(Hf2))
    +as.kform(ij,c(Hf3))
  )

ddphi
## The zero alternating linear map from V^2 to R with V=R^n:
## empty sparse array with 2 columns

as expected.
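
Indeed, each symmetric Hessian already contributes zero on its own, because summing Hij over the antisymmetric wedge products dxi ∧ dxj cancels term by term; a minimal sketch:

as.kform(ij, c(Hf1))   # a single symmetric Hessian; expect the zero form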

Stokes’s theorem

In its most general form, Stokes’s theorem states

$$ \int_{\partial X}\phi=\int_X\mathrm{d}\phi $$

where X ⊂ ℝn is a compact oriented (k + 1)-dimensional manifold with boundary ∂X and ϕ is a k-form defined on a neighborhood of X.

We will verify Stokes’s theorem, following 6.9.5 of Hubbard and Hubbard, in which

$$ \phi= \left(x_1-x_2^2+x_3^3-\cdots\pm x_n^n\right) \left( \sum_{i=1}^n \mathrm{d}x_1\wedge\cdots\wedge\widehat{\mathrm{d}x_i}\wedge\cdots\wedge\mathrm{d}x_n \right) $$

(a hat indicates that a term is absent), and we wish to evaluate $\int_{\partial C_a}\phi$ where $C_a$ is the cube $0\leq x_j\leq a$, $1\leq j\leq n$. Stokes tells us that this is equal to $\int_{C_a}\mathrm{d}\phi$, which is given by

$$ \mathrm{d}\phi=\left(1+2x_2+\cdots+nx_n^{n-1}\right)\mathrm{d}x_1\wedge\cdots\wedge\mathrm{d}x_n $$

and so the volume integral is just

$$ \sum_{j=1}^n \int_{x_1=0}^a \int_{x_2=0}^a \cdots \int_{x_n=0}^a jx_j^{j-1}\, dx_1\, dx_2\ldots dx_n= a^{n-1}\left(a+a^2+\cdots+a^n\right). $$
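
As a quick arithmetic sanity check of this formula (a sketch using base R only, not package functionality; nn and aa are illustrative values), we can estimate the volume integral by Monte Carlo:

set.seed(99)
nn <- 4       # illustrative dimension
aa <- 1.7     # illustrative cube size
X <- matrix(runif(10000*nn, 0, aa), ncol=nn)   # uniform sample of the cube C_a
integrand <- apply(X, 1, function(x) sum(seq_len(nn)*x^(seq_len(nn)-1)))
c(montecarlo = mean(integrand)*aa^nn,          # Monte Carlo estimate of the integral
  exact      = aa^(nn-1)*sum(aa^seq_len(nn)))  # a^(n-1)*(a + a^2 + ... + a^n)
# the two values should agree to within Monte Carlo error (roughly 1%)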

Stokes’s theorem itself is not amenable to direct numerical verification here, but the package does allow slick creation of ϕ:

phi <- function(x){
    n <- length(x)
    sum(x^seq_len(n)*rep_len(c(1,-1),n)) * as.kform(t(apply(diag(n)<1,2,which)))
}
phi(1:9)
## An alternating linear map from V^8 to R with V=R^9:
##                            val
##  2 3 4 5 6 7 8 9  =  371423053
##  1 2 3 4 5 7 8 9  =  371423053
##  1 3 4 5 6 7 8 9  =  371423053
##  1 2 3 4 6 7 8 9  =  371423053
##  1 2 3 4 5 6 7 9  =  371423053
##  1 2 4 5 6 7 8 9  =  371423053
##  1 2 3 5 6 7 8 9  =  371423053
##  1 2 3 4 5 6 8 9  =  371423053
##  1 2 3 4 5 6 7 8  =  371423053

(recall that phi is a function that maps ℝ9 to 8-forms). Here we choose (1, 2, …, 9) ∈ ℝ9, and phi(1:9) as shown above is the resulting 8-form. Thus, if we write ϕ1 : 9 for phi(1:9) we would have ϕ1 : 9: (ℝ9)8 → ℝ, with package idiom as follows:

f <- as.function(phi(1:9))
E <- matrix(runif(72),9,8)   # (R^9)^8
f(E)
## [1] -26620528

Further, dϕ is given by

dphi <- function(x){
    nn <- seq_along(x)
    sum(nn*x^(nn-1)) * as.kform(seq_along(x))
}
dphi(1:9)
## An alternating linear map from V^9 to R with V=R^9:
##                              val
##  1 2 3 4 5 6 7 8 9  =  405071317

(observe that dphi(1:9) is a 9-form, with dϕ1 : 9: (ℝ9)9 → ℝ). Now consider Spivak’s theorem 4.6 (page 82), which in this context states that a 9-form is proportional to the determinant of the 9 × 9 matrix formed from its arguments, with constant of proportionality equal to the form evaluated on the identity matrix I9 [formally and more generally, if v1, …, vn is a basis for V, ω ∈ Λn(V) and wi = ∑aijvj then ω(w1, …, wn) = det (aij) ⋅ ω(v1, …vn)]. Numerically:

f <- as.function(dphi(1:9))
E <- matrix(runif(81),9,9)
f(E)
## [1] -9850953
det(E)*f(diag(9))  # should match f(E) by Spivak's 4.6
## [1] -9850953

References

Hankin, R. K. S. 2022a. “Disordered Vectors in R: Introducing the disordR Package.” arXiv. https://doi.org/10.48550/ARXIV.2210.03856.
———. 2022b. “Fast Multivariate Polynomials in R: The mvp Package.” arXiv. https://doi.org/10.48550/ARXIV.2210.15991.
———. 2022c. “Stokes’s Theorem in R.” arXiv. https://doi.org/10.48550/ARXIV.2210.17008.
Hubbard, J. J., and B. B. Hubbard. 2015. Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach. Fifth. Matrix Editions.
Spivak, M. 1965. Calculus on Manifolds. Addison-Wesley.