Hello internet, I am a new user 🐣! Just trying out the medium platform, let's see how it goes! And to include some tips for me for the future 👍.

A Lazy Morning — Photo by Steven Lio

Display some of my favorite Math:

Einstein’s famous Mass-energy equivalence formula 🌌:
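In its standard form:

```latex
E = mc^2
```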

From wiki: The formula defines the energy E of a particle in its rest frame as the product of its mass m with the speed of light squared (c²).

The 🔔 Curve:

With the probability density function:
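In standard notation:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```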

Here μ is the mean (or expectation) of the distribution and σ is the standard deviation, so σ² is the variance. It is also known as the Gaussian Distribution, or more commonly the Normal Distribution.

An example graph of the function:

Produced in R (Code below)

To begin a code block in the Medium editor:

  • Windows/Linux: Ctrl + Alt + 6
  • Mac: Command + Option + 6
  • or type ``` (triple backtick) on a new line
# Clear console and environment
rm(list = ls())

# Attach required libraries
library(ggplot2)
library(magrittr)

# Set mu and sigma parameters for the Normal curve
mu <- 0
sigma <- 1

# Generate data
x <- seq(-3.5 * sigma, 3.5 * sigma, length.out = 500)
y <- dnorm(x, mu, sigma) # density function
data <- cbind(x = x, y = y) %>% data.frame()

# Cumulative probabilities and axis labels at each n*sigma point
p_lab <- pnorm(seq(-3 * sigma, 3 * sigma, sigma), mu, sigma)
xlab <- c("-3σ", "-2σ", "-σ", "μ", "σ", "2σ", "3σ")

# Plot area settings:
ggplot(data, aes(x = x, y = y)) +
  geom_line() +
  theme_minimal() +
  theme(panel.grid.major.x = element_line(size = 0.2)) +
  ggtitle("Standard Normal Distribution") +
  # text for the sigma labels
  annotate("text",
           x = seq(-3 * sigma, 3 * sigma, sigma),
           y = rep(-0.005, 7),
           label = xlab,
           family = "", fontface = 3, size = 8) +
  # text for the density values at each n*sigma area
  annotate("text",
           x = c(seq(-3 * sigma, 3 * sigma, sigma) - 0.5 * sigma, 0.5 * sigma + 3 * sigma),
           y = round(c(p_lab[1], diff(p_lab), p_lab[1]), 1)^2 + 0.05,
           label = paste0(round(c(p_lab[1], diff(p_lab), p_lab[1]) * 100, 1), "%"),
           family = "", fontface = 2, size = 8) +
  # display parameter values (plotmath strings, parsed into Greek symbols)
  annotate("text",
           x = rep(2.2 * sigma, 2),
           y = c(0.16, 0.14),
           label = c("mu*' ='", "sigma*' ='"), parse = TRUE,
           family = "", fontface = 3, size = 8) +
  annotate("text",
           x = rep(2.4 * sigma, 2),
           y = c(0.164, 0.143),
           label = c(mu, sigma),
           family = "", fontface = 3, size = 7) +
  scale_x_continuous(breaks = seq(-3 * sigma, 3 * sigma, sigma),
                     limits = c(-3.5 * sigma, 3.5 * sigma)) +
  ylab("Probability Density")

Maybe some Linear Algebra:

To find a plane of best fit.

Given a dataset with n samples and p parameters:
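Each sample can be written as a response paired with its regressors (notation assumed here, matching the derivation below):

```latex
\{\, y_i,\; x_{i1}, x_{i2}, \dots, x_{ip} \,\}_{i=1}^{n}
```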

Here the dependent variable y and the p-sized vector of regressors x are assumed to have a linear relationship, and the error variable ε is modeled so that it is as small as possible; ideally it behaves like an unobserved random variable, i.e. “noise”.

Then ideally we want to find β where the model has the form:
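Component-wise, for each sample i:

```latex
y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_p x_{ip} + \varepsilon_i
    = \mathbf{x}_i^{\mathsf{T}} \boldsymbol{\beta} + \varepsilon_i,
\qquad i = 1, \dots, n
```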

Or simply
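In matrix notation, stacking the n samples:

```latex
\mathbf{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon},
\qquad
\mathbf{y} \in \mathbb{R}^{n},\;
X \in \mathbb{R}^{n \times p},\;
\boldsymbol{\beta} \in \mathbb{R}^{p}
```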



Least-squares estimation (a.k.a. Line of best fit):

Since y and x are assumed to have a linear relationship, we would like to find the “best” β that solves the system of equations while minimizing ε. Hence let:
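The sum of squared residuals:

```latex
L(X, \mathbf{y}, \boldsymbol{\beta})
= \lVert \mathbf{y} - X\boldsymbol{\beta} \rVert^2
= (\mathbf{y} - X\boldsymbol{\beta})^{\mathsf{T}} (\mathbf{y} - X\boldsymbol{\beta})
```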

L is called the loss function; it measures the error term as a function of the inputs X, y, and β. Since X and y are the original data we want to “fit”, we look for the “best” β, namely the one at which L(X, y, β) is minimized.

Hence, take the first derivative of L(X, y, β) with respect to β:
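Differentiating with respect to β:

```latex
\frac{\partial L}{\partial \boldsymbol{\beta}}
= -2 X^{\mathsf{T}} (\mathbf{y} - X\boldsymbol{\beta})
```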

Since X and y are fixed and known, and we are only interested in the “best” β, we set the derivative to zero and solve:
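Setting the derivative to zero gives the normal equations, and (assuming XᵀX is invertible) the least-squares estimate:

```latex
X^{\mathsf{T}} X \hat{\boldsymbol{\beta}} = X^{\mathsf{T}} \mathbf{y}
\quad\Longrightarrow\quad
\hat{\boldsymbol{\beta}} = (X^{\mathsf{T}} X)^{-1} X^{\mathsf{T}} \mathbf{y}
```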

This also covers Simple Linear Regression as the special case p = 1.
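As a quick sanity check (a sketch on simulated data, not from the original post), the closed-form estimate can be compared against R’s built-in lm():

```r
# Sanity check: closed-form least squares vs. R's lm()
set.seed(42)
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1.5 + 2 * x1 - 0.5 * x2 + rnorm(n, sd = 0.1)

# Design matrix with an intercept column
X <- cbind(1, x1, x2)

# beta_hat solves (X'X) beta = X'y; solve() avoids forming the explicit inverse
beta_hat <- solve(t(X) %*% X, t(X) %*% y)

# lm() fits the same model (it uses a QR decomposition internally)
fit <- lm(y ~ x1 + x2)
print(cbind(closed_form = beta_hat, lm = coef(fit)))
```

Both columns of the printed comparison should agree to numerical precision.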

This concludes the Hello World! 🌎

Thank you for reading!
