Let’s Revise our Vector Spaces

Amit Singh Bhatti
5 min read · Apr 28, 2019


Well, I have been out of touch with writing articles and thought, let's get back to it, this time starting from scratch by revising concepts and then moving up to the real applications of these mathematical tools in the so-called real-world "artificial intelligence".

This article is also partly me showing off my recent note-making with the iPad. Yeah, honestly. Now I am going to be consistent with articles, starting with this one brushing up the basics of linear algebra, then moving forward to multivariate calculus, then traditional ML, then DL, and finally the applications.
My articles will be more mathematics-oriented than code-oriented, because for code you can find any number of repos by people. My articles will focus on how things actually work.

Let's start by defining the ways in which we represent numbers:
* Scalar — a single number
* Vector — a one-dimensional array
* Matrix — a 2D array
* Tensor — a multi-dimensional array

Some operations we can perform:
* Scalar + scalar — scalar
* Scalar + matrix — matrix (broadcast)
* Scalar multiplied with a matrix — matrix (broadcast)
* Matrix + vector (added to each row) — matrix (broadcast)
* Matrix(A) * Matrix(B) — matrix (number of columns of A = number of rows of B)
* Element-wise product — the Hadamard product
* Distributive property: A(B + C) = AB + AC
* Associative property: A(BC) = (AB)C
* Not commutative: AB != BA in general
* The dot product between two vectors is commutative though: transpose(x) * y = transpose(y) * x
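These rules are easy to sanity-check in code. Here is a minimal NumPy sketch; the matrices A, B and the vectors are just made-up toy examples:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
v = np.array([10., 20.])

print(2 + 3)        # scalar + scalar -> scalar
print(5 + A)        # scalar + matrix -> matrix (broadcast)
print(2 * A)        # scalar * matrix -> matrix (broadcast)
print(A + v)        # vector added to each row of the matrix (broadcast)
print(A @ B)        # matrix product (cols of A must equal rows of B)
print(A * B)        # element-wise (Hadamard) product

print(np.allclose(A @ B, B @ A))        # False: AB != BA in general

x, y = np.array([1., 2.]), np.array([3., 4.])
print(np.dot(x, y) == np.dot(y, x))     # True: the vector dot product commutes
```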

Linear algebra is an art of mathematics used to solve systems of equations. But it has more to do with how we think about vector spaces, and how finding a solution of an equation means finding a vector in the vector space spanned by a matrix. Consider the equation Ax = b: A is a matrix, x is a vector, and b is a vector.
What do this matrix and these vectors represent?
Is it possible to have a vector x in the vector space spanned by the matrix A such that Ax equals the vector b? Finding a solution means finding a vector x for which Ax = b in the vector space of A.
What is the vector space of A?
It is the set of all linear combinations of the columns of matrix A, called the column space of A.
Similarly, the set of all linear combinations of the rows of matrix A defines the row space of A.
Rank
The rank of a matrix is the number of linearly independent columns in it. If all the columns of a matrix are linearly independent, it is called a full-rank matrix.
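Here is a quick NumPy sketch of the idea that solving Ax = b means finding a vector in the column space of A, with a made-up A and b (assuming A is full rank, so a solution exists):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])   # a toy full-rank matrix
b = np.array([5., 6.])

# Finding a solution means finding x such that A @ x lands on b,
# i.e. b lies in the column space of A.
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))          # True: b is reachable

# Rank = number of linearly independent columns.
print(np.linalg.matrix_rank(A))                       # 2 -> full rank
print(np.linalg.matrix_rank(np.array([[1., 2.],
                                      [2., 4.]])))    # 1 -> dependent columns
```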

What is a Matrix, by the way??

It is actually a transformation of the vector space. Here we see a vector v which has been transformed by applying a matrix. A matrix basically changes your basis vectors; if one or more basis vectors don't change direction after the matrix transformation, they are actually eigenvectors (we will get to those in some time).
Here in the diagram, the blue lines represent the original vector space spanned by the basis vectors [1, 0] and [0, 1]. The red lines represent the transformed vector space, with basis vectors [1, -2] and [3, 0].
A matrix is basically a transformation.
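To make the "matrix = transformation" picture concrete, here is a small NumPy sketch using the same basis vectors as in the diagram:

```python
import numpy as np

# The columns of M are where the basis vectors land after the transformation:
# e1 = [1, 0] -> [1, -2] and e2 = [0, 1] -> [3, 0], as in the diagram.
M = np.array([[1., 3.],
              [-2., 0.]])

e1, e2 = np.array([1., 0.]), np.array([0., 1.])
print(M @ e1)   # [ 1. -2.]
print(M @ e2)   # [ 3.  0.]

# Any vector is transformed the same way: a mix of the new basis vectors.
v = np.array([2., 1.])
print(M @ v)    # [ 5. -4.]  == 2*[1,-2] + 1*[3,0]
```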

IDENTITY AND INVERSE MATRICES
There is a reason why I have put these two matrices together under one heading.
* Identity matrix — does not change a vector at all when applied to it. It is a square matrix with 1s on the diagonal and 0s everywhere else.
* Inverse matrix — in physical terms, it is a matrix which inverts or reverses the effect of a matrix transformation.
But wait, you cannot have an inverse for every kind of matrix. An inverse exists only for square matrices that are also non-singular (full rank).
But why ?

The inverse transformation has to be consistent with respect to the matrix A:
A * inverse(A) has to be equal to inverse(A) * A (both giving the identity), which is only possible if A is a square matrix.
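A minimal NumPy sketch of both ideas, using an arbitrary non-singular 2x2 matrix:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])   # square and non-singular
I = np.eye(2)
v = np.array([7., 8.])

print(np.allclose(I @ v, v))                 # identity leaves vectors unchanged

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))     # True
print(np.allclose(A_inv @ A, np.eye(2)))     # True: consistent on both sides
print(np.allclose(A_inv @ (A @ v), v))       # inverse undoes the transformation
```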

Some other kinds of matrices:
* Diagonal matrix — only the diagonal elements are non-zero. Multiplying by a diagonal matrix is computationally efficient, and inversion is efficient too (just take the reciprocal of each diagonal entry).
* Symmetric matrix — transpose(A) = A, where transpose means rows changed to columns and columns to rows.
* Orthogonal matrix — when all the columns in the matrix are mutually perpendicular.
* Orthonormal matrix — when, in addition, every column of the orthogonal matrix has unit norm. Orthonormal/orthogonal matrices have an inverse equal to their transpose.
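Here is a small NumPy sketch checking these properties; the Q factor from a QR factorization is just a convenient way to get an orthonormal matrix:

```python
import numpy as np

# Diagonal matrix: inversion is just 1/d on the diagonal.
d = np.array([2., 4., 5.])
D = np.diag(d)
print(np.allclose(np.linalg.inv(D), np.diag(1. / d)))   # True

# Symmetric matrix: transpose(A) == A.
S = np.array([[1., 2.], [2., 3.]])
print(np.allclose(S.T, S))                               # True

# Orthonormal matrix (e.g. Q from a QR factorization): inverse == transpose.
Q, _ = np.linalg.qr(np.random.rand(3, 3))
print(np.allclose(Q.T @ Q, np.eye(3)))                   # columns are orthonormal
print(np.allclose(np.linalg.inv(Q), Q.T))                # True
```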

DECOMPOSITION OF MATRICES

Just like the factorization of numbers, we can factorize matrices into products of other matrices to understand their properties. These decompositions are actually the basis for dimensionality-reduction techniques (PCA, SVD).

  • Eigenvalue decomposition is the first decomposition technique, where we decompose the matrix into its eigenvectors and eigenvalues.
    As I mentioned above in the post, eigenvectors are the vectors whose direction does not change when you apply the transformation to them. They might be scaled by a number, but the vector itself does not change direction. The number by which they scale is called the eigenvalue.
    So the matrix A is written as a product built from its eigenvalues and eigenvectors: A = V * diag(eigenvalues) * inverse(V), where the columns of V are the eigenvectors. We will look at its application in PCA in the next article. Hold your horses till then.
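A short NumPy sketch of eigendecomposition on a made-up 2x2 matrix, checking both that Av = (eigenvalue)v and that A can be reconstructed from the factors:

```python
import numpy as np

A = np.array([[4., 1.], [2., 3.]])   # a toy matrix

eigvals, V = np.linalg.eig(A)        # columns of V are the eigenvectors

# Each eigenvector is only scaled by A, not rotated: A v = lambda v.
v0 = V[:, 0]
print(np.allclose(A @ v0, eigvals[0] * v0))                      # True

# Reconstruct A from its eigenvalues and eigenvectors.
print(np.allclose(V @ np.diag(eigvals) @ np.linalg.inv(V), A))   # True
```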

Singular value decomposition is another technique used for decomposing matrices, and it also works for non-square matrices. It has properties quite similar to eigenvalue decomposition; the major difference lies in SVD's ability to decompose non-square matrices: A = U * S * transpose(V), where U and V are orthonormal and S is diagonal (holding the singular values).
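And a quick NumPy sketch of SVD on a random non-square matrix:

```python
import numpy as np

A = np.random.rand(4, 3)             # a non-square matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# U and Vt have orthonormal columns/rows; s holds the singular values.
print(np.allclose(U @ np.diag(s) @ Vt, A))   # True: A = U * S * transpose(V)
print(s)                                     # singular values, sorted descending
```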

That's enough of what I wanted to touch on here; it sets our base for linear algebra. In the next article we will see how PCA exploits the above-mentioned concepts.
We are just warming up for the convoluted effect of these concepts, which leads to the real-life application of math called "Artificial Intelligence".

Stay tuned!! :)
