Wild Linear Algebra: videos WLA 1-42
This course by N J Wildberger presents a geometric view of Linear Algebra, with a focus on applications. We look at vectors, matrices, determinants, change of bases, row reduction, lines and planes, polynomial spaces, bases and coordinate vectors, and much more!
We are aiming for a careful exposition oriented towards conceptual understanding, applications, and explicit examples. There are quite a few problems to challenge the viewer. It is perfectly possible to learn Linear Algebra from scratch with this course. I suggest going through the series at about one lecture a week, keeping notes and doing all the problems. Good luck!
This is the full first lecture of a course on Linear Algebra. Given by N J Wildberger of the School of Mathematics and Statistics at UNSW, the course gives a more geometric and natural approach to this important subject, with lots of interesting applications. Our orientation is that Linear Algebra is really "Linear Algebraic Geometry": so teaching the algebra without the geometry is depriving the student of the heart of the subject.
The first lecture discusses the affine grid plane and introduces vectors, along with the number one problem of linear algebra: how to invert a linear change of coordinates! Intended audience: first year college or undergraduate students, motivated high school students, high school teachers, general public interested in mathematics. Enjoy! 
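The "number one problem" of inverting a linear change of coordinates can be sketched in a few lines of code for the two-variable case (a hedged illustration of the idea, not material from the video; the function name and example numbers are my own):

```python
# A minimal sketch of inverting the linear change of coordinates
#     x' = a*x + b*y
#     y' = c*x + d*y
# Whenever the determinant a*d - b*c is nonzero, the change can be undone.

def invert_change_of_coordinates(a, b, c, d):
    """Return (e, f, g, h) so that x = e*x' + f*y' and y = g*x' + h*y'."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("zero determinant: the change cannot be inverted")
    return (d / det, -b / det, -c / det, a / det)

# Example: x' = 2x + y, y' = x + y has determinant 1,
# so x = x' - y' and y = -x' + 2y'.
e, f, g, h = invert_change_of_coordinates(2, 1, 1, 1)
```

Going forward and back recovers the original coordinates, which is exactly the inversion the lecture is after.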

Here we give basic constructions with vectors and discuss the laws of vector arithmetic. Affine combinations of vectors are particularly important.
This is the second lecture of a first course on linear algebra, given by N J Wildberger at UNSW. This course will present a more geometric and application-oriented approach to linear algebra. We also look at applications to several problems in geometry, such as the facts that the diagonals of a parallelogram bisect each other, and the medians of a triangle meet at a point.
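The parallelogram fact can be checked numerically with affine combinations (my own small illustration, with my own choice of vertices; not code from the lecture):

```python
# The point (1-t)*P + t*Q is an affine combination lying on the line PQ;
# t = 1/2 gives the midpoint.

def affine_combination(p, q, t):
    """The affine combination (1-t)*p + t*q of two points in the plane."""
    return ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])

# Parallelogram A, B, C, D, with C chosen so that opposite sides are parallel.
A, B, D = (0, 0), (3, 1), (1, 2)
C = (B[0] + D[0] - A[0], B[1] + D[1] - A[1])

mid_AC = affine_combination(A, C, 0.5)   # midpoint of one diagonal
mid_BD = affine_combination(B, D, 0.5)   # midpoint of the other
# The two midpoints coincide: the diagonals bisect each other.
```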

Here we discuss applications of vectors, first of all to forces, velocity and acceleration, and discuss two of Newton's important laws. We look at motion in the grid plane, and discuss two interesting games: the racetrack game and the trajectory game.
This is the third lecture in a first course on Linear Algebra given by N J Wildberger at UNSW. We introduce some graphical games (Racetrack Game, Trajectory Game) with velocities and accelerations in a grid plane, and then discuss center of mass, Archimedes' principle of the lever, and barycentric coordinates. Lots of interesting topics!
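The center-of-mass idea can be put in coordinates directly (a sketch of my own, not from the lecture): the center of mass is the affine combination of the points weighted by their mass fractions, which are their barycentric coordinates.

```python
def center_of_mass(points, masses):
    """Affine combination of points weighted by mass fractions."""
    total = sum(masses)
    return tuple(sum(m * p[i] for p, m in zip(points, masses)) / total
                 for i in range(len(points[0])))

# Equal masses at the vertices of a triangle balance at the centroid.
centroid = center_of_mass([(0, 0), (3, 0), (0, 3)], [1, 1, 1])
```

A lever is the two-point case: a mass 2 at the origin balances a mass 1 at (3, 0) one third of the way along, as Archimedes' principle predicts.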

Area and volume in Linear Algebra are central concepts that underpin the entire subject, and lead naturally to the rich theory of determinants, a key subject of 18th and 19th century mathematics.
This is the fourth lecture of a first course on Linear Algebra, given by N J Wildberger. Here we start with a pictorial treatment of area, then move to an algebraic formulation using bivectors. These are two-dimensional versions of vectors, introduced in the 1840s by Grassmann. The three-dimensional case of volume uses trivectors.
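In coordinates, the bivector of two plane vectors reduces to the familiar 2x2 determinant; a one-line sketch (my own illustration):

```python
def signed_area(v, w):
    """Signed area of the parallelogram spanned by v and w: the coordinate
    expression of the bivector v ^ w, i.e. the 2x2 determinant."""
    return v[0] * w[1] - v[1] * w[0]
```

Swapping the two vectors flips the sign, which is the antisymmetry of the bivector.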

This is the 5th lecture of this course on Linear Algebra. We analyse the fundamental problem of inverting a change of coordinates, and give applications to solving a system of two linear equations in two unknowns. This problem has two interpretations, one in terms of meets of lines, and one in terms of combining vectors to form a given vector.
We introduce vectors and matrices in a purely algebraic way. Composing two changes of variable leads to the basic multiplication formula for matrices. This has a close connection with determinants. This course is given by Assoc Prof N J Wildberger of UNSW, also the discoverer of Rational Trigonometry, which is explained in the WildTrig series. CONTENT SUMMARY: pg 1: @00:08 inverting the relationship between two pairs of variables; pg 2: @02:59 application of change of coordinates; pg 3: @05:43 changing coordinates is related to solving a system of linear equations; a family of problems; pg 4: @10:06 example with parallel lines; zero determinant means the 2 lines are parallel (2-dim); pg 5: @11:54 Vector interpretation of a linear system; pg 6: @15:45 example of 2 vector system without solution; pg 7: @17:39 change of coordinates as heart of the subject; pg 8: @21:26 generalize the example on the previous page; pg 9: @24:09 matrix notation; a column vector; matrix/vector multiplication; pg 10: @26:20 writing a pair of linear equations in matrix/vector form; pg 11: @29:34 arithmetic with matrices and vectors; introducing notation independent of application; you might think of this page as the start of a course in linear algebra; column vectors, scalar multiplication, addition, subtraction; approach is independent of any geometric interpretation; pg 12: @32:26 laws of vector arithmetic; pg 13: @33:42 geometrical interpretation of abstract column vectors; pg 14: @35:19 A matrix as an array of numbers; scalar multiplication, addition, subtraction; matrices follow the same laws as vectors; pg 15: @38:37 define product of matrix and a column vector; define product of two 2x2 matrices; pg 16: @41:45 examples; pg 17: @44:21 determinants; alternate notation; theorem: the determinant of a product of matrices equals the product of their determinants; exercises 5.(1:2); pg 18: @46:19 exercises 5.(3:5);
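The determinant theorem quoted in the summary is easy to check on a small example (my own code, not from the video):

```python
def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists [[a, b], [c, d]]."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def det(A):
    """2x2 determinant."""
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

# det(AB) = det(A) * det(B)
A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 1]]
```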

This is the 6th lecture in this course on Linear Algebra. We review matrix arithmetic for 2x2 matrices and compare the main laws with the simpler case of 1x1 matrices. Then we apply 2x2 matrices to study transformations of vectors, with examples including reflections, rotations and dilations.
We visualize these with pictures using the standard Cartesian coordinate framework. The concept of a linear transformation is also introduced. This is part of a series on Linear Algebra given by N J Wildberger, also the discoverer of Rational Trigonometry. 
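One concrete family of the 2x2 transformations discussed here is reflection in a line through the origin; a hedged sketch with my own function name, using the standard formula for reflection in the line spanned by (a, b):

```python
from fractions import Fraction

def reflection_matrix(a, b):
    """Matrix of reflection in the line through the origin spanned by (a, b):
    (1/(a^2+b^2)) * [[a^2-b^2, 2ab], [2ab, b^2-a^2]], kept exact with Fractions."""
    q = Fraction(a * a + b * b)
    return [[(a*a - b*b) / q, 2*a*b / q],
            [2*a*b / q, (b*b - a*a) / q]]
```

Reflecting in the x-axis gives [[1, 0], [0, -1]], reflecting in the line y = x swaps the coordinates, and composing any reflection with itself gives the identity.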

This is the 7th lecture in this course on Linear Algebra. Here we continue discussing 2x2 matrices, their interpretation as linear transformations of the plane, how to analyse rotations, including a rational formulation, and how to combine rotations and reflections.
Finally we discuss the connections with calculus, introducing the idea that the derivative is really a linear transformation. CONTENT SUMMARY: pg 1: @00:08 a bit of review; matrix/vector multiplication; define a mapping/function/transformation; A linear transformation; pg 2: @02:40 proof of transformation linearity; pg 3: @05:25 rule implied by knowledge of linearity; mapped base vectors; area dilation factor @09:20; pg 4: @10:37 determining how the basis vectors transform; The columns of the transformation matrix are the transformations of the basis vectors; pg 5: @11:56 examples; pg 6: @14:44 example continued; rotations; unit circle; rotation matrix; pg 7: @19:14 rotations by 30, 45 and 60 degrees; pg 8: @23:49 some trig identities; exercise 3.1; pg 9: @26:28 Rational parametrization; alternate rotation matrix; exercise; pg 10: @30:22 reflection; pg 11: @35:11 reflection continued; Composition of linear transformations; pg 12: @38:02 example: rotation/reflection composition; pg 13: @39:24 example continued; pg 14: @42:44 Linear approximations to nonlinear maps; globally nonlinear/locally approx. linear; differential calculus mentioned; pg 15: @45:50 example of linear approx. to nonlinear map; Leibniz's notation; pg 16: @50:28 example continued; the derivative matrix at a point; Lesson: derivatives are linear transformations @51:19; pg 17: @52:18 exercises 7.(3:4); pg 18: @53:27 exercises 7.(5:7); (THANKS to EmptySpaceEnterprise)
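The rational parametrization of rotations mentioned at @26:28 can be sketched as follows (this is my reading of the idea, with my own parameter name: parametrize by the half-slope t, so that every matrix entry is a rational function of t and no trigonometric functions are needed):

```python
from fractions import Fraction

def rational_rotation(t):
    """Rotation matrix parametrized rationally by the half-slope t
    (t = tan(theta/2) in the classical picture); entries stay rational."""
    t = Fraction(t)
    d = 1 + t * t
    return [[(1 - t*t) / d, -2*t / d],
            [ 2*t / d, (1 - t*t) / d]]
```

For any rational t the resulting matrix has determinant 1, and t = 1 gives the quarter turn.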

This is the very important 8th lecture in this series on Linear Algebra by N J Wildberger. Here we solve the most fundamental problem in the subject in the 3x3 case, in such a way that extension to higher dimensions becomes almost obvious.
What is the fundamental problem? It is: How to invert a linear change of coordinates? Or in matrix terms: How to find the inverse of a matrix? And the answer rests squarely on the wonderful function called the determinant. Be prepared for some algebra, but it is beautiful algebra! CONTENT SUMMARY: pg 1: @00:08 How to invert the change in coordinates; 3x3 matrix; 2x2 review; pg 2: @02:12 importance of the determinant; determinant relation to trivectors; pg 3: @05:40 different ways of obtaining the determinant; pg 4: @09:44 solving the 3x3 linear system; pg 5: @14:28 solving the system continued; pg 6: @16:46 3x3 inversion theorem derived; pg 7: @17:56 notation to help remember the 3x3 inversion formula; definition of the minor of a matrix; pg 8: @20:16 Definition of the adjoint of a matrix; relationship of the inverse, determinant and adjoint of a matrix; pg 9: @22:47 examples; determination of the adjoint; determination of the inverse; matrix times its inverse; the identity matrix; pg 10: @31:39 example; pg 11: @34:52 3x3 matrix operations; pg 12: @38:09 why the inverse law works; properties of a 3x3 matrix; an invertible matrix; pg 13: @40:10 Proposition: If 2 matrices are invertible then so is their product, and the inverse of the product is equal to the product of their inverses (rearranged); proof; pg 14: @42:18 exercises 8.(1:2) ; pg 15: @43:41 exercises 8.(3:4) ; 
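The adjoint-based inversion can be sketched in code (my own transcription of the standard adjugate formula, hedged rather than the lecture's exact notation): the key identity is A * adj(A) = det(A) * I, so A^(-1) = adj(A) / det(A) whenever det(A) is nonzero.

```python
def det3(A):
    """3x3 determinant, expanded along the first row."""
    return (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))

def adjugate3(A):
    """Transpose of the matrix of cofactors (the 'adjoint' of the lecture)."""
    adj = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            # minor: delete row i and column j, then take its 2x2 determinant
            m = [[A[r][c] for c in range(3) if c != j]
                 for r in range(3) if r != i]
            adj[j][i] = (-1) ** (i + j) * (m[0][0]*m[1][1] - m[0][1]*m[1][0])
    return adj
```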

This is the ninth lecture of this course on Linear Algebra by N J Wildberger. Here we give a gentle introduction to three dimensional space, starting with the analog of a grid plane built from a packing of parallelepipeds in space.
We discuss two different ways of drawing 3D objects in 2D, emphasizing the importance of parallel projection. Some discussion of the nature of space and modern physics, then an introduction of affine space via coordinates. The distinction between points and vectors is important, and we talk also about lines and planes. CONTENT SUMMARY: Introduction: @00:07 3 dimensional geometry; gaining intuition; expect models and pictures and some philosophy; parallelepipeds; pattern generated by 3 basic vectors; arithmetizing space; affine situation; no notion of length; pg 1: @04:15 Perspective projection; Parallel projection @06:29; pg 2: @09:42 example of parallel projection; suggested exercises @14:00; pg 3: @15:36 coordinate axes; right handed configuration; pg 4: @20:05 The nature of space; remark to base our understanding of space on arithmetic; pg 5: @23:43 A point in space: a triple of numbers; a point rather than a vector @28:20; pg 6: @29:09 A vector; remark on importance of distinction between points and vectors; pg 7: @31:58 points, lines, planes; pg 8: @33:25 relations between 2 lines in space; identical, parallel, intersecting, skew; pg 9: @34:27 determination of plane; relations between 2 planes; identical, parallel, intersecting; pg 10: @35:35 Affine space; Vector space; a vector space has structure that an affine space has not; pg 11: @39:51 exercises 9.(1:2); pg 12: @41:12 exercise 9.3;

This is the tenth lecture in this series on Linear Algebra by N J Wildberger. In this lecture we discuss parametric and Cartesian equations of lines and planes in 3-dimensional affine space. We start by reviewing lines in 2D. A novel feature is the description of the space of all such lines as a Mobius band. For lines and planes in 3D, we avoid the use of inner products and cross products, using determinants instead.
CONTENT SUMMARY: pg 1: @00:08 review of lines in 2 dimensions; Cartesian equation of a line; pg 2: @03:13 parametric equation of a line; pg 3: @05:33 example: finding the meet of 2 lines; pg 4: @09:07 example: same problem as previous page with lines being described in parametric form; pg 5: @13:07 special lines in the 2-dimensional case; the x and y axes, and lines parallel to the x and y axes; pg 6: @14:41 pencils and stacks; pg 7: @16:52 question: What does the space of all lines look like?; topologically gluing a line to every point on a circle; pg 8: @20:53 cylinder; Mobius band; pg 9: @26:59 lines and planes in 3D; planes; Cartesian equation of a plane; pg 10: @30:20 solving a system of equations in 3D; matrix of determinants of minors; pg 11: @35:50 lines in 3D; two points, point and vector, intersection of 2 planes; parametric equation; pg 12: @39:05 line in Cartesian and parametric form; Cartesian form describes 2 planes that meet in a line; pg 13: @43:02 examples; pg 14: @47:18 meet of two planes; method found in very few linear algebra texts; a way of introducing parameters; (THANKS to EmptySpaceEnterprise)

This is the 11th lecture in this course on Linear Algebra by N J Wildberger. Here we talk about 3x3 matrices and their applications to linear transformations of three dimensional space. This includes dilations, reflections and rotations, with plenty of examples.
CONTENT SUMMARY: pg 1: @00:08 matrix/vector multiplication; Two interpretations: linear transformation/Change of coordinates; active vs passive approach; pg 2: @04:00 linear transformation approach; example; columns of transformation matrix are the 3 basis vectors transformed; pg 3: @07:22 Identity transformation; dilations (scales the entire space); dilations are a closed system under composition and addition; remark on diagonal matrices and rational numbers; pg 4: @11:09 mixed dilations; Mixed dilations are also a closed system under composition and addition; pg 5: @14:15 examples; (easy) reflections; reflection in a plane; reflection in a line; pg 6: @16:43 examples: (easy) projections; projection to a plane; projection to a line; pg 7: @19:06 examples: (easy) rotations; pg 8: @22:44 Rational rotations; halfturn formulation; pg 9: @25:36 parallel projection of a vector (u) onto a plane at arbitrary projection direction (l); pg 10: @29:10 The parallel projection matrix; projection properties; pg 11: @31:15 projection example continued; projecting (u) onto the line (l); remark that the resulting matrix is rank 1; pg 12: @35:48 A general reflection in a plane; pg 13: @39:40 A general reflection in a line; pg 14: @42:38 response of the general formulas in the case of perpendicular projection and reflection; introducing the notion of perpendicularity; the normal vector to a plane is read off as the coefficients of x,y,z in the cartesian formula of the plane; pg 15: @46:26 revisit of the general formulas; the quadrance of the vector mentioned @48:20 ; remark on the benefits of abstraction @49:17 ; pg 16 @51:11 exercises 11.(1:2) ; 
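One concrete instance of the parallel projection discussed here, sketched with my own symbols (hedged, not the lecture's notation): project onto the plane n.x = 0 along a direction l with n.l nonzero, via the matrix P = I - (l n^T)/(n.l). Such a P satisfies the projection property P*P = P and sends the direction l itself to zero.

```python
from fractions import Fraction

def projection_matrix(n, l):
    """Matrix of the parallel projection onto the plane n.x = 0 along the
    direction l (requires n.l != 0): P = I - (l n^T) / (n.l)."""
    d = sum(n[i] * l[i] for i in range(3))
    return [[(1 if i == j else 0) - Fraction(l[i] * n[j], d)
             for j in range(3)] for i in range(3)]
```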

This is the 12th lecture in this course on Linear Algebra by N J Wildberger. Linear algebra is all about different perspectives. Here we compare Bob's and Rachel's coordinate systems and learn how to change from one basis to another. Then we define similar matrices and generalized dilations: those linear transformations that are similar to a (mixed) dilation, or can be diagonalized. We look at an example to sketch the basic idea.
This leads to the second most important problem in the subject: how to find the eigenvectors and eigenvalues of a matrix. CONTENT SUMMARY: pg 1: @00:08 introducing the 2nd most important problem in linear algebra; 2 frames of reference; desire to compare frames of reference; example using "Bob" and "Rachel" basis vectors; pg 2: @06:27 example of vector change of basis; going back and forth between "Bob's" and "Rachel's" systems; pg 3: @09:34 notation to facilitate change of basis conversation; ordered bases, coordinate vectors, change of basis matrix; change of basis matrices are inverse matrices; pg 4: @14:11 process to obtain change of basis matrix; examples to verify agreement with earlier results; pg 5: @16:47 how linear transformations appear when going from one frame of reference to another; start with an example; pg 6: @22:29 example continued; the same linear transformation expressed in different frames of reference; the transformation is much more easily expressed in "Rachel's" system; pg 7: @26:20 Definition of similar matrices; similar matrices represent the same linear transformation but with respect to (w.r.t.) different bases; example; the similarity relation is symmetric; pg 12: @38:45 definitions of eigenvector and eigenvalue; example; pg 13: @42:54 example of finding eigenvectors and associated eigenvalues in the 2x2 case; the characteristic equation; pg 14: @48:15 example continued; the eigenvalues make the associated matrix not invertible; the previous as a fundamental derivation; pg 15: @51:56 exercise 12.1; pg 16: @53:14 exercises 12.(2:4); remark on a problem encountered in the previous solution @54:00;
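In the 2x2 case the eigenvalue problem comes down to the characteristic equation lambda^2 - (a+d)*lambda + (ad - bc) = 0; a sketch with my own helper (it assumes the eigenvalues are real, i.e. a nonnegative discriminant):

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Real roots of the characteristic equation of [[a, b], [c, d]]:
    lambda^2 - trace*lambda + det = 0 (assumes a nonnegative discriminant)."""
    trace, det = a + d, a * d - b * c
    root = math.sqrt(trace * trace - 4 * det)
    return sorted([(trace - root) / 2, (trace + root) / 2])
```

For the symmetric matrix [[2, 1], [1, 2]] this yields eigenvalues 1 and 3.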

This is the 13th lecture in this course on Linear Algebra. Here we start studying general systems of linear equations, matrix forms for such a system, row reduction, elementary row operations and row echelon forms.
This course is given by Assoc Prof N J Wildberger of UNSW, who also has other YouTube series, including WildTrig, MathFoundations and Algebraic Topology. CONTENT SUMMARY: pg 1: @00:08 How to solve general systems of equations; the Chinese 'Nine Chapters of the Mathematical Art' / C. F. Gauss; row reduction; pg 2: @03:04 General set-up: m equations in n variables; matrix formulation; matrix of coefficients; pg 3: @05:50 Defining the product of a matrix by a column vector; 2 propositions used throughout the remainder of the course; matrix formulation of basic system of equations; pg 4: @09:07 return to original example; Linear transformation; pg 5: @10:49 a 3rd way of thinking about our system of linear equations; vector formulation; example; pg 6: @14:12 example: row reduction (working with equations); pg 7: @24:48 example: row reduction (working with matrices); row echelon form mentioned; reduced row echelon form; setting a variable to a parameter; pg 8: @30:17 Terminology; augmented matrix, leading entry, leading column, row echelon form; pg 9: @32:07 examples; solution strategy; pg 10: @35:36 elementary row operations; operations are invertible (can be undone); algorithm for row reducing a matrix; pg 11: @38:11 algorithm for row reducing a matrix; pivot entry; pg 12: @43:41 example; row reducing a matrix per algorithm; pg 13: @47:38 exercises 13.(1:2); pg 14: @48:02 exercise 13.3;
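The row reduction algorithm using the elementary row operations can be sketched as follows (my own minimal implementation, with exact fractions so no rounding intrudes; not code from the course):

```python
from fractions import Fraction

def row_echelon(M):
    """Reduce a matrix (list of rows) to row echelon form with the
    elementary row operations, using exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]          # swap rows
        for i in range(r + 1, rows):             # clear entries below the pivot
            factor = A[i][c] / A[r][c]
            A[i] = [A[i][j] - factor * A[r][j] for j in range(cols)]
        r += 1
        if r == rows:
            break
    return A
```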

This video explains the second half of row reduction, a basic algorithm in linear algebra used to solve systems of linear equations. Parameters are introduced corresponding to the non-leading columns of the augmented matrix of the system.
We apply this to the problem of writing one vector as a linear combination of others, both in the two dimensional and three dimensional situations. This is the 14th lecture of this course on Linear Algebra given by Assoc Prof N J Wildberger. CONTENT SUMMARY: pg 1: @00:07 row reduction using example; get all leading entry values to 1; fully reduced row echelon form; glorified high-school algebra done very systematically; pg 2: @04:56 example 2 (2 parallel lines in a plane); no solution; example 3 (3 equations, 3 variables); back substitution; pg 3: @10:20 definition: Fully reduced row echelon form; examples; pg 4: @12:40 examples; obtaining fully reduced row echelon matrices; pg 5: @15:48 Parameters; example (a line); parametric solution; pg 6: @19:28 example (a plane); parametric form for a plane: point on plane and 2 direction vectors; pg 7: @22:39 example (intersection of 2 planes in 3D); pg 8: @26:33 why introduction of parameters works; pg 9: @30:32 problem 1: writing a vector as a linear combination of 3 other vectors; important point: a system can be thought of in different terms; pg 10: @33:19 problem 1 continued; pg 11: @38:25 problem 2: writing a 2d vector as a linear combination of 3 other 2d vectors; pg 12: @41:06 problem 2 continued; pg 13: @44:06 exercise 14.1; pg 14: @45:02 exercise 14.2; pg 15: @46:51 exercise 14.3;

This lecture shows how the three main problems of Linear Algebra can be tackled using the algorithm of row reduction, also called Gaussian elimination. The three main problems are: how to invert a linear change of coordinates, how to compute the eigenvalues and eigenvectors of a square matrix, and how to compute the determinant of a square matrix. Each problem is illustrated with examples.
This is one of a series of lectures making up a first course in Linear Algebra, given by Assoc Prof N J Wildberger of UNSW, also the discoverer of Rational Trigonometry. CONTENT SUMMARY: pg 1: @00:08 3 main problems of Linear Algebra; pg 2: @01:51 Inverting a linear change of coordinates; example; pg 3: @05:19 example finished; new idea: introduce a y_i matrix to obtain the inverse of a matrix; pg 4: @09:17 Theorem concerning an invertible matrix; pg 5: @11:00 Finding eigenvalues and eigenvectors of an nxn matrix; remark about the homogeneous case; pg 6: @14:23 The eigenvalue problem using row reduction; example 1; check of result @19:34; pg 7: @20:34 example 2 as a reminder of the physical meaning of an eigenvector equation (see WildLinAlg7); pg 8: @24:38 example 2 continued; finding the eigenvectors using row reduction; perpendicular eigenvectors; pg 9: @27:52 How to calculate a determinant; characteristics of a determinant; as the volume of a parallelepiped; properties of a determinant necessary to do row reduction @30:20; pg 10: @31:32 the determinant of an upper triangular matrix; pg 11: @35:11 example: putting a matrix in upper triangular form to obtain its determinant; remark about this lesson @38:47; pg 12: @39:33 exercises 15.(1:2); invert some systems using row reduction; find inverse matrices; pg 13: @40:17 exercises 15.(3:4); find eigenvalues and eigenvectors; compute determinants;
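The determinant-by-row-reduction idea can be sketched directly (my own implementation, hedged): row swaps flip the sign, adding a multiple of one row to another changes nothing, and the determinant of the resulting upper triangular matrix is the product of its diagonal entries.

```python
from fractions import Fraction

def det_by_row_reduction(M):
    """Determinant via reduction to upper triangular form, tracking the
    sign changes caused by row swaps; exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    sign = 1
    for c in range(n):
        pivot = next((i for i in range(c, n) if A[i][c] != 0), None)
        if pivot is None:
            return Fraction(0)          # a zero column: determinant is 0
        if pivot != c:
            A[c], A[pivot] = A[pivot], A[c]
            sign = -sign                # each swap flips the sign
        for i in range(c + 1, n):
            factor = A[i][c] / A[c][c]
            A[i] = [A[i][j] - factor * A[c][j] for j in range(n)]
    result = Fraction(sign)
    for i in range(n):
        result *= A[i][i]               # product of the diagonal
    return result
```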

This video looks at applications of row reduction to understanding linear transformations between two and three dimensional space. It also introduces the important notions of a spanning set and a linearly independent set of vectors.
This is the 16th lecture in this series on Linear Algebra by N J Wildberger. CONTENT SUMMARY: pg 1: @00:08 More applications of row reduction; how row reduction helps us to understand some interesting aspects of linear transformations; not necessarily working with square matrices; example; pg 2: @04:05 A transformation where the input space and the output space are not necessarily the same; example: what happens to the basis vectors under transformation?; pg 3: @08:56 analyse previous example using row reduction; the equation of the image plane is obtained by row reduction; pg 4: @14:55 example 2: a linear transformation from 3d to 2d; pg 5: @16:53 example 2 continued; analysed using row reduction; importance of mapping the basis vectors @19:46; some vectors are sent to zero in the transformation; pg 6: @23:02 example continued; row reduction; pg 7: @26:41 Spanning sets; examples; a unique linear combination; pg 8: @30:28 Spanning sets continued; examples; pg 9: @33:35 use of row reduction to determine whether we have a spanning set; pg 10: @39:19 Spanning sets continued; example 2; not a spanning set; pg 11: @41:35 spanning sets continued; 2d space; pg 12: @43:32 linearly independent sets of vectors; examples; a linearly dependent set; pg 13: @45:42 linear independence/dependence continued; examples; a set containing the zero vector; pg 14: @48:05 linear dependence continued; example 1; pg 15: @49:42 example 1 continued using row reduction; pg 16: @51:14 linear dependence continued; example 2; pg 17: @52:54 example 2 continued; pg 18: @54:10 exercise 16.1; pg 19: @54:41 exercises 16.(2:3); pg 20: @55:42 exercises 16.(4:5); pg 21: @56:09 exercises 16.(6:7);

This is a full hour lecture in which we step up to linear transformations with spaces of more than 3 dimensions, introduce the kernel and the image properties, and the corresponding dimension numbers called nullity and rank.
Most of the lecture looks in detail at a particular transformation from four to three dimensional space. We discuss how to visualize four dimensions in a way that is consistent with our pictures of two and three dimensions. The main computations rest on our understanding of row reduction of a matrix. CONTENT SUMMARY: pg 1: @00:08 Lesson about nullity and rank of a linear transformation; kernel and image of a linear transformation; general linear transformations; Example (mxn is 3x4); pg 2: @03:16 How to visualize in higher dimensions; shift from affine space to vector space; points/vectors; pg 3: @09:17 4-dimensional space (algebraically); pg 4: @11:00 4-dimensional space (geometrically); pg 5: @16:52 linear transformation from 4-dim to 3-dim; kernel and image of a transformation as fundamental; nullity as dimension of the kernel; rank as dimension of the image; pg 6: @23:05 Definition of kernel vector; kernel property; pg 7: @24:15 Finding vectors with the kernel property for a transformation using row reduction; pg 8: @28:32 Definition of image vector; image property; pg 9: @30:17 at least the columns of the transformation matrix have this image property; pg 10: @32:57 Finding vectors with the image property for a transformation using row reduction; pg 11: @37:11 Another approach to the image of a transformation; the column space of a matrix; pg 12: @40:37 The whole picture; kernel, image, nullity, rank; pg 13: @43:21 Important observations; pg 14: @45:51 relationship between the nullity and the rank; Rank-Nullity theorem; pg 15: @48:27 example: linear transformation from 3-dim space to 4-dim space; kernel, image, rank, nullity; pg 16: @50:12 example continued; kernel; pg 17: @52:43 example continued; image; remark on relationship of columns in row reduction @53:07; pg 18: @55:05 example summary; pg 19: @58:21 exercises 17.(1:2); kernel property, image property; pg 20: @59:03 exercise 17.3; describe ker() and im();
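Rank and nullity can both be read off from row reduction: the rank is the number of pivots, and the Rank-Nullity theorem then gives the nullity as columns minus rank. A self-contained sketch (my own helpers, hedged):

```python
from fractions import Fraction

def rank(M):
    """Number of pivots found while row reducing M (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, rows):
            factor = A[i][c] / A[r][c]
            A[i] = [A[i][j] - factor * A[r][j] for j in range(cols)]
        r += 1
    return r

def nullity(M):
    """Rank-Nullity: nullity = (number of columns) - rank."""
    return len(M[0]) - rank(M)
```

For a 3x4 matrix whose third row is the sum of the first two, the rank is 2 and the nullity is 2, and they add up to the number of columns.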

To a system of m equations in n variables, we can associate an m by n matrix A, and a linear transformation T from n-dim space to m-dim space. The kernel and rank of this transformation give us geometric insight into whether there are solutions, and if so what the solutions look like.
This video introduces subspaces of a general linear space, but does so in a rather unorthodox manner, more logically secure than the usual one. Instead of talking idly about 'infinite sets', which we have no hope of specifying, we talk rather about 'properties', which fits more naturally with modern computer science. So while we do standard linear algebra, we approach it with a highly novel conceptual framework. This is discussed (or will be) at greater length in my MathFoundations series. As usual, the discussion is brought down to earth by a careful look at some illustrative examples. This is a long lecture (more than an hour) so take it slowly. CONTENT SUMMARY: pg 1: @00:08 The geometry of a system of linear equations; m linear equations in n variables; pg 2: @01:39 The picture to keep in mind; the big picture; pg 3: @05:00 The kernel property; property versus set; remark on fundamental issue @06:15 (see "math foundations" series); pg 4: @10:10 examples; what is a line?; what is a circle?; properties instead of infinite sets; pg 5: @14:21 managing properties; statement of properties moral; pg 6: @15:49 examples; properties of a 3d vector; pg 7: @17:49 Subspace properties; definition and examples; pg 8: @21:09 subspace properties of 2d vectors; pg 9: @22:46 subspace properties of 3d vectors; pg 10: @26:15 Definition of kernel property; definition of image property; Theorem 1; Theorem 2; pg 11: @27:54 Theorem proofs; pg 12: @32:05 subspaces in higher dimensional spaces; spanning set; equation set; hyperplane @36:28; pg 13: @37:43 Linear transformation n-dim to m-dim; pg13_Theorem; pg 14: @42:18 proof of pg13_Theorem; pg 15: @46:06 example (2d to 2d); pg 16: @51:48 example (3d to 2d); pg 17: @58:03 example (3d to 3d); pg 18: @1:03:13 example continued; remark: typifies a linear transformation @01:05:20; pg 19: @1:05:37 exercise 18.1; pg 20: @1:06:20 exercise 18.2;

Spaces of polynomials provide important applications of linear algebra. Here we introduce polynomials and the associated polynomial functions (we prefer to keep these separate in our minds).
Polynomials are vital in interpolation, and we show how this works. Then we explain how regression in statistics (both linear and nonlinear) can be viewed using our geometric approach to a linear transformation. Finally we discuss the use of 'isomorphism' to relate the space of polynomials up to a certain fixed degree to our more familiar space of column vectors of a certain size. CONTENT SUMMARY: pg 1: @00:08 Linear algebra applied to polynomials; polynomials; pg 2: @03:33 a general polynomial; associated polynomial function; example; pg 3: @07:35 importance of polynomial functions; pg 4: @10:37 Interpolation; pg 5: @12:23 finding a polynomial going through one point/two points; example; pg 6: @14:44 example continued; pg 7: @18:11 example (find the line through 2 points); pg 8: @20:47 (find the polynomial through 3 points); Vandermonde matrix @22:40; the pattern @24:11; pg 9: @25:02 Regression (statistics); looking for an approximate solution; pg 10: @26:59 Regression continued; pg 11: @30:09 Linear regression; remark on the power of linear algebra @32:39; pg 12: @33:04 Spaces; the connection between polynomials and linear algebra; operations; similarity of polynomials and vectors; pg 13: @35:48 trying to say this object is like this object; mapping: start out with a polynomial and end up with a vector of coefficients @37:24; isomorphism; vector of coefficients; bijection @38:07; surjective; injective; pg 14: @40:46 connection between functions and an abstract 3d vector space; pg 15: @43:36 Exercises 19.(1:3); pg 16: @44:51 Exercise 19.4; (THANKS to EmptySpaceEnterprise)
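The Vandermonde matrix mentioned at @22:40 turns interpolation into a linear system: asking for a polynomial c0 + c1*x + ... + c_(n-1)*x^(n-1) through the points (x_i, y_i) is exactly solving V c = y with the matrix below. A small sketch (my own illustration):

```python
def vandermonde(xs):
    """Vandermonde matrix for nodes xs: row i is [1, x_i, x_i^2, ...],
    one power for each of the n interpolation points."""
    n = len(xs)
    return [[x ** j for j in range(n)] for x in xs]
```

For the points (1, 2), (2, 5), (3, 10), which lie on y = x^2 + 1, the coefficient vector [1, 0, 1] solves the system.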

This lecture studies spaces of polynomials from a linear algebra point of view. We are especially interested in useful bases of a four dimensional space like P^3: polynomials of degree three or less. We introduce the standard (or power) basis, also the modified factorial basis. Translations of the corresponding functions yield linear transformations, giving Taylor bases and a purely algebraic definition of the derivative. We see that some basic calculus ideas are really algebraic in nature, not requiring 'real numbers', limits or slopes of tangents.
This course in Linear Algebra is given by N J Wildberger. CONTENT SUMMARY: pg 1: @00:08 Map of a space of polynomials to a space of vectors; Definition: linear/vector space; course distinction @03:27; pg 2: @04:46 Definition: An ordered basis of a linear space; Example 1; pg 3: @07:03 Example 2 (basis of a vector space); Example 3 (basis of a polynomial space); Definition: Dimension of a linear/vector space; examples; pg 4: @09:25 The space of polynomials is richer than the isomorphic space of vectors; translating polynomials @09:50; degree of the polynomial is preserved in translation; pg 5: @13:20 Study 1 of translation by 3 (see previous page); A linear transformation @16:15; pg 6: @16:48 Study 1 continued; image and kernel of polynomial of degree 3 @17:16; pg 7: @19:04 the derivative of a function appears in translation; pg 8: @22:49 Definition: The derivative of a polynomial; calculus via linear algebra @23:00; pg8_Theorem; factorial notation; pg 9: @28:05 Calculus as algebra; pg9_Theorem (product rule); proof (Leibniz mentioned); pg 10: @33:46 Translating a polynomial and obtaining the derivatives; Taylor series mentioned; pg 11: @37:58 Importance of various bases; standard basis; factorial basis; Example; coefficient vectors of a polynomial with respect to a basis; pg 12: @42:10 Theorem (Basis isomorphism correspondence); The standard vector space of column vectors @45:30; pg 13: @46:22 The derivative as a linear transformation; remark about formulas in calculus and combinatorics @50:46; pg 14: @51:13 Another basis; the standard basis moved over by 3; example; pg 15: @54:32 Change of basis matrix; How to get this matrix! @54:56; pg 16: @57:16 Exercises 20.(1:3); closing remarks @58:27;
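The purely algebraic derivative can be sketched on coefficient vectors (my own encoding, hedged: the list [c0, c1, c2, ...] stands for c0 + c1*x + c2*x^2 + ...); no limits or real numbers are needed, and the derivative is visibly a linear transformation with a matrix:

```python
def derivative(coeffs):
    """Algebraic derivative: x^k maps to k*x^(k-1), coefficient by coefficient."""
    return [k * coeffs[k] for k in range(1, len(coeffs))]

def derivative_matrix(n):
    """Matrix of D on polynomials of degree < n in the standard basis
    1, x, ..., x^(n-1): column j is the coefficient vector of D(x^j).
    (The zero last row reflects that D drops the degree by one.)"""
    M = [[0] * n for _ in range(n)]
    for j in range(1, n):
        M[j - 1][j] = j
    return M
```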

Polynomial spaces are excellent examples of linear spaces. For example, the space of polynomials of degree three or less forms a linear or vector space which we call P^3. In this lecture we look at some more interesting bases of this space: the Lagrange, Chebyshev, Bernstein and Spread polynomial basis. The last comes from Rational Trigonometry.
This is one of a series on Linear Algebra given by N J Wildberger of UNSW. CONTENT SUMMARY: pg 1: @00:08 Introduction review; polynomials of degree 3; Lagrange, Chebyshev, Bernstein, spread polynomials; basis: standard/power, factorial, Taylor; Lagrange polynomials developed @02:18; pg 2: @03:53 Lagrange development continued; evaluation mapping; pg 3: @06:13 Lagrange development continued; polynomials that map to the standard basis vectors e1,e2,e3,e4 (Lagrange interpolation polynomials); pg 4: @09:12 Lagrange basis; Polynomial that goes through four desired points; pg 5: @11:39 Uniform approximation and Bernstein polynomials pg 6: @13:42 reference to Pascal's triangle; Bernstein polynomials (named); Bernstein basis; pg 7: @16:37 view of Bernstein polynomials; pg 8: @18:06 Show that Bernstein polynomials of a certain degree do form a basis for that corresponding polynomial space; Pascal's triangle; Unnormalized Bernstein polynomials; WLA21_pg8_theorem (Bernstein polynomial basis); pg 9: @21:04 How Bernstein polynomials are used to approximate a given continuous function on an interval; pg 10: @24:13 Chebyshev polynomials; using a recursive definition; Chebyshev polynomial diagram; pg 14: @36:56 Spread polynomials relation to Chebyshevs; Spread polynomials advantage over Chebyshev; Pascal's array; Spread polynomials as a source of study @39:17; pg 15: @39:43 Spread basis; change of basis matrices; moral @42:15 ; pg 16: @42:40 exercises 21.14 ; pg 17: @43:44 exercises 21.57 ; closing remarks @44:38
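As a rough illustration of the Lagrange basis idea (the nodes and values below are made up, not the lecture's example), the cubic through four desired points can be evaluated directly from the Lagrange interpolation polynomials:

```python
# Sketch: the cubic Lagrange interpolation polynomial through four points,
# evaluated directly from the Lagrange basis polynomials l_i, each of which
# is 1 at its own node and 0 at the other three.

def lagrange_eval(xs, ys, x):
    """Value at x of the unique degree <= len(xs)-1 polynomial with p(xs[i]) = ys[i]."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0                      # builds the basis polynomial l_i at x
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

xs, ys = [0, 1, 2, 3], [1, 2, 0, 4]   # four desired points (illustrative)
# Each node is reproduced exactly:
print([lagrange_eval(xs, ys, x) for x in xs])   # [1.0, 2.0, 0.0, 4.0]
```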

Polynomials can be interpreted as functions, and also as sequences. In this lecture we move to considering sequences. Aside from the familiar powers, we introduce also falling and rising powers, using the notation of D. Knuth. These have an intimate connection to forward and backward difference operators. We look at some particular sequences, such as the square pyramidal numbers, from the view of this `difference calculus'.
CONTENT SUMMARY: pg 1: @00:08 polynomials and sequence spaces; remark about expressions versus objects @03:27 ; pg 2: @04:24 Some polynomials and associated sequences; Ordinary powers; Factorial powers (D. Knuth); pg 3: @10:34 Lowering (factorial) power; Raising (factorial) power; connection between raising and lowering; all polynomials @13:28; pg 4: @13:52 Why we want these raising and lowering factorial powers; general sequences; Online encyclopedia of integer sequences (N. Sloane); 'square pyramidal numbers'; Table of forward differences; pg 5: @19:23 Forward and backward differences; forward/backward difference operators on polynomials; examples: operator on 1 @23:07; pg 6: @23:38 Forward and backward differences on a sequence; difference below/above convention; pg 7: @27:21 Forward and backward differences of lowering powers; calculus reference @29:37; pg 8: @31:27 Forward and backward differences of raising powers; operators act like derivative @34:45 ; n equals 0 raising and lowering defined; pg 9: @36:17 Introduction of some new bases; standard/power basis, lowering power basis, raising power basis; proven to be bases; pg 10: @39:23 WLA22_pg10_Theorem (Newton); proof; pg 10b: @44:40 Lesson: it helps to start at n=0; example (square pyramidal numbers); an important formula @47:47; pg 11: @50:00 formula of Archimedes; taking forward differences compared to summation @52:46; pg 12: @53:20 a simpler formula; example: sum of cubes; pg 13: @57:38 exercises 22.14; pg 14: @59:06 exercise 22.5; find the next term; closing remarks @59:50;
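Newton's theorem from this lecture can be checked numerically; a sketch (function names are mine), using the square pyramidal numbers mentioned above:

```python
# Sketch: forward differences of the square pyramidal numbers 0, 1, 5, 14,
# 30, ... and Newton's forward-difference formula, which rebuilds the n-th
# term from the leading differences at n = 0.

from math import comb

def forward_differences(seq):
    """Successive rows of forward differences (each row one shorter)."""
    rows = [list(seq)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

pyramidal = [n * (n + 1) * (2 * n + 1) // 6 for n in range(6)]  # 0,1,5,14,30,55
rows = forward_differences(pyramidal)
leading = [row[0] for row in rows]        # differences down the left edge

# Newton: a_n = sum_k leading[k] * C(n, k), where C(n, k) = n^(falling k)/k!
def newton(n):
    return sum(d * comb(n, k) for k, d in enumerate(leading))

print([newton(n) for n in range(6)])      # [0, 1, 5, 14, 30, 55]
```

As the lecture's moral suggests, starting the sequence at n = 0 is what makes the leading differences line up with the formula.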

This video begins to lay out proper foundations for planar Euclidean geometry, based on arithmetic. We follow Descartes and Fermat in working in a coordinate plane, but a novel feature is that we use only rational numbers.
Points and lines are the basic objects which need to be defined.

When we interpret polynomials as sequences rather than as functions, new bases become important. The falling and rising powers play an important role in analysing general sequences through forward and backward difference operators. The changes from rising powers to ordinary powers, and from ordinary powers to falling powers, give rise to two interesting families of numbers, called Stirling numbers of the first and second kind. We use Karamata notation, advocated by Knuth, to describe these: brackets and braces. Combinatorial and number theoretic interpretations are mentioned. We discuss the important relation between two bases of a linear space and the corresponding change from one kind of coordinate vector to another. This is applied to study general polynomial sequences. This lecture is not easy, and represents a high point of this course in Linear Algebra. However, it introduces powerful and common techniques which are actually quite useful in a variety of practical applications.
CONTENT SUMMARY: pg 1: @00:08 Intro: (Stirling numbers and Pascal triangles); sequences; change of terminology @00:44 ; falling power; rising power; list of rising powers; summation notation and Stirling numbers @03:00; pg 2: @04:55 James Stirling (1749), "Methodus Differentialis"; Stirling number notation warning @05:04 ; 'n bracket k' as Karamata notation (Knuth); Stirling numbers of the first kind; Change of basis rewritten from pg 1 @05:29 ; Stirling matrix of the first kind; remark about unconventional indexing of Stirling numbers @06:36; pg 3: @07:13 Calculating Stirling numbers; Theorem (Recurrence relation: Stirling numbers); proof; pg 4: @10:56 Pascal's triangle and binomial coefficients; recurrence relation for binomial coefficients; Pascal matrix; pg 5: @14:08 Combinatorial interpretation of Stirling numbers; pg 6: @17:34 Number theoretic interpretation of Stirling numbers; summary of Stirling number interpretation @21:50; pg 7: @23:09 Stirling numbers of the 2nd kind; Inverting the Pascal matrices; pg 8: @26:36 Inverting Stirling matrices; reintroduction of some ignored symmetry @27:48 ; Stirling matrix of the 2nd kind; pg 9: @30:41 Definition of Stirling numbers of the second kind; 'n brace k' notation of Stirling numbers of the 2nd kind; Stirling matrix of the 2nd kind; pg 10: @32:54 Combinatorial interpretation of Stirling_numbers_2nd_kind ; Theorem (Recurrence relation for Stirling_numbers_2nd_kind); pg 11: @35:54 Statement of the importance of the Stirling numbers; important question @37:23 ; suggestion to review starting WLA1_pg7 @40:27; pg 12: @40:48 Of primary importance to problems of practical application; Non_standard ideas; This is at the heart of change of basis @47:08; pg 13: @47:26 Transpose a matrix and vector; pg 14: @50:11 Application of this (effect of change of basis on coordinate vectors): analyse a polynomial sequence; Newton's formula; A very useful thing to be able to do @53:54; pg 15: @54:44 General C: transpose of signed Stirling matrix of 1st kind; pg 16: @55:30 Exercises 23.13; pg 17: @56:13 Exercises 23.45; closing remarks @57:14;
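The two recurrence relations and the inverse relationship between the Stirling matrices can be sketched as follows (this uses the standard indexing from 0, not necessarily the lecture's unconventional indexing):

```python
# Sketch: Stirling numbers of both kinds from their recurrences, and a
# check that the signed first-kind matrix and the second-kind matrix are
# inverse to each other (lower triangular, rows and columns indexed from 0).

def stirling1_unsigned(N):
    c = [[0] * N for _ in range(N)]
    c[0][0] = 1
    for n in range(1, N):
        for k in range(1, N):
            c[n][k] = c[n - 1][k - 1] + (n - 1) * c[n - 1][k]
    return c

def stirling2(N):
    S = [[0] * N for _ in range(N)]
    S[0][0] = 1
    for n in range(1, N):
        for k in range(1, N):
            S[n][k] = S[n - 1][k - 1] + k * S[n - 1][k]
    return S

N = 6
s = [[(-1) ** (n - k) * v for k, v in enumerate(row)]
     for n, row in enumerate(stirling1_unsigned(N))]     # signed 1st kind
S = stirling2(N)
prod = [[sum(s[i][k] * S[k][j] for k in range(N)) for j in range(N)]
        for i in range(N)]
print(prod == [[int(i == j) for j in range(N)] for i in range(N)])  # True
```

The product being the identity is exactly the statement that the change of basis from ordinary powers to falling powers inverts the change from rising (or falling) powers back.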

We introduce de Casteljau Bezier curves by combining linked convex/affine motions along line segments. This way of describing and specifying curves was introduced around 1960 by these two engineers, aiming to find a better way to capture curves for design work (originally for automobiles, but now they are used everywhere in design).
Surprisingly elegant and simple algebraic formulas for these curves can be found using the Bernstein polynomials we talked about in WLA21. We concentrate on the cubic case, although analogs for higher degree curves, and also surfaces, are not hard to generate. 
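A minimal sketch of the cubic case (the control points below are illustrative, not from the lecture): the de Casteljau construction by repeated affine interpolation agrees with the Bernstein form of the same curve.

```python
# Sketch: evaluating a cubic Bezier curve two ways -- by repeated affine
# interpolation (de Casteljau) and by the cubic Bernstein polynomials.

def lerp(p, q, t):
    """Affine motion along the segment from p to q."""
    return tuple((1 - t) * a + t * b for a, b in zip(p, q))

def de_casteljau(pts, t):
    """Repeatedly interpolate adjacent points until one point remains."""
    while len(pts) > 1:
        pts = [lerp(p, q, t) for p, q in zip(pts, pts[1:])]
    return pts[0]

def bernstein_cubic(pts, t):
    """Same curve via the Bernstein basis (1-t)^3, 3t(1-t)^2, 3t^2(1-t), t^3."""
    b = [(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3]
    return tuple(sum(c * p[i] for c, p in zip(b, pts)) for i in range(2))

ctrl = [(0, 0), (1, 2), (3, 3), (4, 0)]   # hypothetical control points
print(de_casteljau(ctrl, 0.25))
print(bernstein_cubic(ctrl, 0.25))        # same point both ways
```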

In our last video, we talked about de Casteljau Bezier curves, mostly cubics, for design work. In this lecture we discuss another application of cubic splines: the interpolation problem of finding a smooth curve passing through a finite number of points in the (x,y) plane.
Our approach to this question is somewhat novel, and focusses on the use of what we call Taylor coefficient vectors. A given cubic polynomial in our space P^3 has a 4-vector of Taylor coefficients at any point, and the relation between two such Taylor vectors is given by a linear transformation, essentially a Pascal matrix (see WLA23). So our strategy is to create the cubic spline one segment at a time, transferring the knowledge of the Taylor coefficient vector at one endpoint to the other. Although we are using calculus ideas, we develop them independently, so the viewer is not required to have had prior knowledge of calculus.

In this lecture we put our previous discussion of coordinate vectors and change of basis into a more general and novel framework, and look at an important application to calculus.
We introduce the new idea of a vatrix: a matrix whose entries are themselves vectors. This concept allows us to encode the basis of a linear/vector space, such as P^3, as a vector, and linear combinations as products of row and column vatrices. Coefficient vectors now become row vectors, and change of basis matrices have a logical and intuitive labelling. Our main example goes right back to our first lecture with Bob and Rachel's two bases for the affine plane. Then we look at Taylor coefficient vectors for a polynomial p in P^3. For every rational point c there is a Taylor basis, with an associated vatrix of powers of (alpha - c). The crucial change of basis matrices are generalized Pascal matrices which enjoy lovely algebraic properties: they form a one-parameter group of matrices. In fact this whole theory has natural connections with the representation theory of sl(2), which we do not mention. This is the final lecture in this first half of this course on Linear Algebra.
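The one-parameter group property of the generalized Pascal matrices can be verified directly; a sketch in the 4x4 case (function names are mine, not the lecture's):

```python
# Sketch: the generalized 4x4 Pascal matrices P(t), with entries
# C(i, j) t^(i-j) below the diagonal, satisfy P(s) P(t) = P(s + t),
# so they form a one-parameter group.

from fractions import Fraction
from math import comb

def pascal(t, n=4):
    """Lower-triangular generalized Pascal matrix with entries C(i, j) t^(i-j)."""
    return [[comb(i, j) * t ** (i - j) if j <= i else 0
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

s, t = Fraction(1, 2), Fraction(3)
print(matmul(pascal(s), pascal(t)) == pascal(s + t))   # True: P(s)P(t) = P(s+t)
print(matmul(pascal(2), pascal(-2)) == pascal(0))      # True: P(-t) inverts P(t)
```

Exact rational arithmetic (via Fraction) keeps the group law an exact identity rather than a floating-point approximation.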

This is the first video of Part II of this course on linear algebra, and we give a brief overview of the applications which we will be concentrating on.
The first topic will be the connections between linear algebra and Euclidean and other geometries. Linear algebra provides an excellent framework for geometry, allowing Euclid's axiomatic approach to be replaced by logically more solid definitions and proofs. However for this to work seamlessly, a more algebraic approach than found in most texts will be adopted here. We will use ideas from Rational Trigonometry, and the dot product (or inner product) will play a central role. We motivate these developments by going back to Euclid's understanding of mathematics' most important theorem: Pythagoras' theorem, and the intimate connection with the notion of perpendicularity.

Here we begin to study metrical geometry from the framework of linear algebra, but we do so in a novel, completely algebraic way. The starting point is the dot product, motivated by Pythagoras' theorem but logically independent of any prior understanding of Euclidean geometry.
The main properties of the dot product are that it is bilinear and symmetric. From the dot product we define perpendicularity of vectors, and the quadrance of vectors, which replaces the `length' and is a purely algebraic quantity, so it is much more general and accurate, allowing us to work over the rational numbers as usual. Pythagoras' theorem then gets a completely algebraic proof, which is quite fundamental. This topic should be a cornerstone of all undergraduate linear algebra courses (and of course in the future it will be!). Finally we introduce the idea of more general symmetric bilinear forms, associated to symmetric 2x2 matrices. These will allow us to extend much of this discussion, in particular to understanding the relativistic geometry introduced a century ago by Einstein and Minkowski.
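Written out, the algebraic proof of Pythagoras' theorem mentioned above is essentially a one-line computation from bilinearity and symmetry of the dot product, with quadrance defined by Q(v) = v.v:

```latex
Q(u+v) = (u+v)\cdot(u+v) = u\cdot u + 2\,u\cdot v + v\cdot v = Q(u) + Q(v) + 2\,u\cdot v
```

So for perpendicular vectors, meaning u.v = 0, we get Q(u+v) = Q(u) + Q(v), with no appeal to any prior Euclidean geometry.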

The dot product, or inner product, is the main source of metrical structure for planar Euclidean geometry when we work in the framework of linear algebra. In this video we show how it leads to the idea of a linear functional, how it gives us a direct and geometrical understanding of lines via normal vectors, and how to work with projections.


We continue to discuss aspects of planar geometry related to the dot product. This includes the idea of a linear functional, the point-normal form of a line and normal vectors. We derive the formula for the projection of one vector onto another vector, and the quadrance of a point to a line.
We introduce isometries, and show that reflection in a line perpendicular to a vector is an isometry. The product of two reflections is called a rotation. 
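A sketch of the projection and reflection formulas (the vectors are illustrative; exact rational arithmetic keeps the isometry check exact):

```python
# Sketch: projection of u onto v, and reflection of u in the line through
# the origin spanned by v; the reflection preserves quadrance.

from fractions import Fraction as F

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def proj(u, v):
    """Projection of u onto the line spanned by v: ((u.v)/(v.v)) v."""
    c = F(dot(u, v), dot(v, v))
    return (c * v[0], c * v[1])

def reflect(u, v):
    """Reflection of u in the line spanned by v: 2 proj_v(u) - u."""
    p = proj(u, v)
    return (2 * p[0] - u[0], 2 * p[1] - u[1])

u, v = (3, 4), (1, 2)                      # illustrative vectors
r = reflect(u, v)
print(dot(u, u) == dot(r, r))              # True: quadrance is preserved
```

Reflecting twice returns the original vector, and the product of two such reflections is a rotation, as in the lecture.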

Circles are fundamental geometrical objects that fit naturally into a linear algebra framework using the dot product. For simplicity we discuss circles with center the origin O in this lecture.
Interestingly, there are (at least!) two different ways to introduce the circle from the Euclidean dot product: in terms of a quadratic equation v.v=k or in terms of a bilinear equation: v_1.v_2=k. The second is much less familiar, but has more power to explain tangents and to appreciate Apollonius' theory of pole/polar duality, which is expressed very simply and naturally in the language of linear algebra. It also allows us to consider not only the example of the usual unit circle, but also the imaginary unit circle, which plays a surprisingly big role in geometry! We finish the lecture with some classical theorems about circles which are fundamental for projective geometry.

We introduce the Euclidean dot product in three-dimensional space, and then quadrance between points, linear functionals and planes, projections, Pythagoras' theorem and perpendicularity in a parallel way to what we did in two dimensions. A sphere (centered at the origin) is then given either by a quadratic equation of the form v.v=k, or by a bilinear equation of the form v_1.v_2=k.


We introduce the planar relativistic dot product which underlies Einstein's Special theory of Relativity (SR). This is a small variant on the usual Euclidean dot product (a plus sign is replaced with a minus sign) and there are both important similarities and important differences between the two.
In this video we show that many of the standard geometrical ideas that we discussed in the Euclidean setting hold also in this relativistic case. This includes the notion of quadrance, Pythagoras' theorem, linear functionals, equations of lines, projections and circles (which appear to us as particular rectangular hyperbolas). A nice reference for SR is the YouTube series produced by Joe Wolfe from UNSW at www.phys.unsw.edu.au/einsteinlight (in my video I forgot the .au). 
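A tiny sketch of the relativistic setting (vectors are illustrative): replacing the plus sign with a minus sign in the dot product changes which vectors are perpendicular, but Pythagoras' theorem survives unchanged.

```python
# Sketch: the planar relativistic (red) dot product x1*x2 - y1*y2 and its
# quadrance; Pythagoras' theorem holds exactly as in the Euclidean case
# once perpendicularity is read off from this dot product.

def red_dot(u, v):
    return u[0] * v[0] - u[1] * v[1]

def quadrance(v):
    return red_dot(v, v)

u = (3, 1)
v = (2, 6)                      # red-perpendicular to u: 3*2 - 1*6 = 0
assert red_dot(u, v) == 0
w = (u[0] + v[0], u[1] + v[1])
print(quadrance(u), quadrance(v), quadrance(w))   # 8 -32 -24
```

Note that quadrances can now be negative or zero for nonzero vectors, which is exactly why the relativistic "circles" Q(v) = k appear to us as rectangular hyperbolas.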

Three dimensional relativistic geometry is a natural extension of the 2D situation we studied in our last video. We have a look at the new features involving hyperboloids of one and two sheets acting as relativistic spheres.
Surprisingly, this relativistic geometry has a natural and intimate connection with oriented planar circles, also called signed circles, or directed circles, or cycles. The relativistic quadrance has an interpretation as the quadrance between corresponding oriented circles, measured along a common oriented tangent. This theory was developed in the 19th century and is called cyclography, but it is not as well-known these days as it ought to be.

We continue our discussion of oriented, or signed, or directed circles in the plane, which are also called cycles, and the intimate connection with relativistic geometry in three dimensions. This correspondence makes it easier for us to apply linear algebraic ideas to the geometry of circles, but it also provides an interesting geometrical interpretation of the framework of Einstein's Special Relativity. This theory was developed in the 19th century and is called cyclography, but it is not as well-known these days as it ought to be, and benefits from a purely algebraic development.
In this lecture we talk about the homothetic centre of two circles (sometimes also called a center of similitude), how to find this, and then describe a lovely theorem of G. Monge on the homothetic centres of three circles. Tangency of signed circles has a natural interpretation as the relativistic quadrance being zero, and we show how the locus of circles tangent to two given ones has a relativistic meaning as an intersection of two cones, yielding a hyperbola of centres in the original plane. The famous problem of Apollonius makes its appearance. We describe how spheres in the relativistic space, which appear as hyperboloids of one and two sheets, can be viewed from the point of view of circle geometry, involving interesting pencils of circles. Finally we give a perhaps new theorem that describes the geometrical meaning of the relativistic quadrance between two signed circles. This lecture has a lot of material in it, so go slowly!

We introduce some elementary mechanics relating to conservation of momentum and energy using linear algebra. For simplicity we work in a one dimensional situation, but introduce a two dimensional spacetime to interpret (elastic) collisions geometrically.
We obtain a pleasant butterfly collision diagram which explains what happens when two particles of different masses and speeds collide. 
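The collision calculation can be sketched in a few lines (masses and speeds below are illustrative numbers, not the lecture's; the formulas follow from solving the two conservation equations):

```python
# Sketch: final velocities of a one-dimensional elastic collision, derived
# from conservation of momentum and of kinetic energy.

from fractions import Fraction as F

def elastic_1d(m1, v1, m2, v2):
    """Final velocities after an elastic collision of two point masses."""
    v1p = F((m1 - m2) * v1 + 2 * m2 * v2, m1 + m2)
    v2p = F((m2 - m1) * v2 + 2 * m1 * v1, m1 + m2)
    return v1p, v2p

m1, v1, m2, v2 = 2, 3, 1, -1
v1p, v2p = elastic_1d(m1, v1, m2, v2)
# Both conserved quantities check out:
print(m1 * v1 + m2 * v2 == m1 * v1p + m2 * v2p)               # True (momentum)
print(m1 * v1**2 + m2 * v2**2 == m1 * v1p**2 + m2 * v2p**2)   # True (energy)
```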

We give a nonstandard introduction to the basic framework of Einstein's Special theory of Relativity (SR), which avoids various assumptions that are usually introduced (often without either explanation or justification). We show that SR is a consequence of simple Newtonian mechanics when we replace the idea of an inertial frame with the idea of an inertial observer.
The discussion is introduced via a one-dimensional Newtonian situation involving bats using echo location to detect events!

Here we continue with our novel explanation of Einstein's Special Relativity (SR), in which we consider a high school Newtonian 1-dimensional world populated by bats, who use sound signals and echolocation to determine the position and times of events around them.
We develop the fundamental Lorentz transformations that move from one bat's coordinates to another bat's coordinates, assuming they have a constant relative motion. An important ingredient is a rational parametrization of the hyperbolas that appear. 
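A sketch of the key algebraic point (the parameter name u and function names are mine, for illustration): a rational parametrization of the hyperbola a^2 - b^2 = 1 gives a Lorentz transformation that preserves the relativistic quadrance of every event.

```python
# Sketch: a Lorentz transformation built from a rational parametrization of
# the hyperbola a^2 - b^2 = 1; it preserves the quadrance t^2 - x^2.

from fractions import Fraction as F

def boost(u):
    """Entries a, b of [[a, b], [b, a]] with a = (1+u^2)/(1-u^2), b = 2u/(1-u^2)."""
    a = (1 + u * u) / (1 - u * u)
    b = (2 * u) / (1 - u * u)
    return a, b                       # note a^2 - b^2 = 1 identically

def apply_boost(u, event):
    a, b = boost(u)
    t, x = event
    return (a * t + b * x, b * t + a * x)

event = (5, 3)                        # (t, x) coordinates of an event
t2, x2 = apply_boost(F(1, 2), event)
print(event[0] ** 2 - event[1] ** 2 == t2 ** 2 - x2 ** 2)   # True
```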

We continue deriving consequences of our novel, simple-minded way of introducing Einstein's Special Relativity (SR), purely in a 1-dimensional Newtonian situation, considering bats who use sound signals to measure positions and times of events around them.
In this lecture we use the Lorentz transformations we derived last time to explain how length contraction, time dilation and Einstein's velocity addition follow simply. This goes back to a simple change of basis argument, which is at the heart of linear algebra. 

From our previous investigations into relativistic geometry, we formally introduce the relativistic (or red) dot product in two dimensional space. This is in contrast with the familiar Euclidean (or blue) dot product. There is also a third dot product, another relativistic one, which we call green.
This leads to a remarkable threefold symmetry in planar geometry, called Chromogeometry. The story is all contained inside the algebra of 2x2 matrices. We discuss also unit circles and isometries in the context of all three geometries simultaneously.
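The three dot products can all be packaged as symmetric 2x2 matrices; a sketch (the matrix convention for green, giving quadrance 2xy, is my assumption for illustration):

```python
# Sketch: blue, red and green dot products as symmetric bilinear forms
# given by symmetric 2x2 matrices; each yields its own quadrance.

FORMS = {
    "blue":  ((1, 0), (0, 1)),    # x1*x2 + y1*y2
    "red":   ((1, 0), (0, -1)),   # x1*x2 - y1*y2
    "green": ((0, 1), (1, 0)),    # x1*y2 + y1*x2
}

def form_dot(M, u, v):
    """Bilinear form u^T M v for a 2x2 matrix M."""
    return sum(M[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

v = (3, 4)
for name, M in FORMS.items():
    print(name, form_dot(M, v, v))   # blue 25, red -7, green 24
```

Each matrix is symmetric, so each form is symmetric and bilinear; only the signature differs, which is what distinguishes the one Euclidean from the two relativistic geometries.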

We continue discussing the triality of blue, red and green dot products, and introduce associated complex numbers. These are nicely organized inside the algebra of 2x2 matrices. Chromogeometry is the study of the interactions between these three geometries.


The threefold symmetry of chromogeometry, involving one Euclidean and two relativistic geometries (blue, red and green), algebraically takes place inside the 2x2 matrices. This is a vector space with a multiplication, which becomes an algebra (associative with identity is included in our definition).


The threefold symmetry of chromogeometry, involving one Euclidean and two relativistic geometries (blue, red and green), algebraically takes place inside the 2x2 matrices. This is a vector space with a multiplication, which becomes an algebra (associative with identity is included in our definition).
We discuss the idea of a subalgebra of an algebra, in particular subalgebras of the algebra of 2x2 matrices. The red and green analogs of complex numbers, which we call complexions, are important examples. The chromatic algebra of 2x2 matrices, which we now call the Dihedrons, supports a remarkable dot product which comes from the fact that the determinant is in this case (and only in this case!) a quadratic object. Remarkably, restricted to the three blue, red and green complexions it yields the associated blue, red and green dot products. We finish by giving a picture of this four-dimensional algebra, something quite similar to the story of quaternions that we discussed in our Famous Math Problems 13 series. [This video is a reposting of an earlier one, with some editing problems fixed.]

[Second of two parts] We address a core logical problem with modern mathematics: the usual definition of a `function' does not contain precise enough bounds on the nature of the rules or procedures (or computer programs) allowed. Here we discuss the difficulty in the context of functions from natural numbers to natural numbers, giving lots of explicit examples. WARNING: this video and the last one destabilize much of the mathematics taught in universities.
