
Univariate Linear Regression

Alex_Rose 2017. 10. 26. 10:38


This part is from: https://www.toptal.com/machine-learning/machine-learning-theory-an-introductory-primer



"A Computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E." - Tom Mitchell, Carnegie Mellon University - 



So if you want your program to predict, for example, traffic patterns at a busy intersection (task T), you can run it through a machine learning algorithm with data about past traffic patterns (experience E) and, if it has successfully "learned", it will then do better at predicting future traffic patterns (performance measure P).


The highly complex nature of many real-world problems, though, often means that inventing specialized algorithms that will solve them perfectly every time is impractical, if not impossible. Examples of machine learning problems include:



"Is this cancer?"


"What is the market value of this house?"


All of these problems are excellent targets for an ML project, and in fact ML has been applied to each of them with great success. 


ML solves problems that cannot be solved by numerical means alone.


Among the different types of ML tasks, a crucial distinction is drawn between supervised and unsupervised learning. We will primarily focus on supervised learning here, but the end of the article includes a brief discussion of unsupervised learning with some links for those who are interested in pursuing the topic further. 











This part is from: http://kr.mathworks.com/help/symbolic/mupad_ug/univariate-linear-regression.html


Univariate Linear Regression 



Regression is the process of fitting models to data. Linear regression assumes that the relationship between the dependent variable y_i and the independent variable x_i is linear:

y_i = a + b * x_i

Here a is the offset and b is the slope of the linear relationship. For linear regression of a data sample with one independent variable, the fit minimizes the quadratic deviation

chi^2 = sum_i w_i * (y_i - a - b * x_i)^2

with the positive weights w_i. By default, the weights are equal to 1.
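To make the least-squares step concrete, here is a short Python sketch (my own illustration, not part of the MuPAD documentation; the name weighted_lin_reg is hypothetical). It computes the offset a, the slope b, and the quadratic deviation in closed form, with the weights defaulting to 1 as in stats::linReg:

def weighted_lin_reg(x, y, w=None):
    # Fit y ~ a + b*x by weighted least squares (closed form).
    # With all weights equal to 1 this is ordinary linear regression.
    if w is None:
        w = [1.0] * len(x)
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw  # weighted mean of x
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw  # weighted mean of y
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    a = my - b * mx
    chi2 = sum(wi * (yi - a - b * xi) ** 2 for wi, xi, yi in zip(w, x, y))
    return a, b, chi2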


Besides the offset a and the slope b of a fitted linear model, stats::linReg also returns the value of the quadratic deviation chi^2. For example, fit the linear model to the following data:



x := [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

y := [11, 13, 15, 17, 19, 21, 23, 25, 27, 29]

stats::linReg(x,y)


Result:

[[9, 2], 0]
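That is, the fitted line is y = 2x + 9 with zero quadratic deviation, since every data point lies exactly on it (for example, x = 1 gives 2*1 + 9 = 11 and x = 10 gives 2*10 + 9 = 29). As a cross-check, the weighted_lin_reg sketch above should reproduce the same numbers:

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [11, 13, 15, 17, 19, 21, 23, 25, 27, 29]
a, b, chi2 = weighted_lin_reg(x, y)
print(a, b, chi2)  # expected: 9.0 2.0 0.0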

 

















