Showing posts from April, 2016

Machine Learning Course, Week 2-1 (from YouTube)

Weekly Objectives
- Learn the most classical methods of machine learning
  - Rule-based approach
  - Classical statistics approach
  - Information theory approach
- Rule-based machine learning
  - How to find the specialized and the generalized rules
  - Why the rules are easily broken
- Decision Tree
  - How to create a decision tree given a training dataset
  - Why the tree becomes a weak learner with a new dataset
- Linear Regression
  - How to infer a parameter set from a training dataset
  - Why feature engineering has its limits

1. RULE BASED MACHINE LEARNING
1) A Perfect World for Rule Based Learning
A perfect world with
- No observation errors, no inconsistent observations
- No stochastic elements in the system we observe
- Full information in the observations to regenerate the system

Sky, Temp, Humid, Wind, Water, Forecast -> EnjoySport
Under the three conditions above, where observations and actions are perfectly consistent, a perfect w...
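The "specialized vs. generalized rules" objective above is usually illustrated with the classic Find-S algorithm on EnjoySport-style data: start from the first positive example (the most specific hypothesis) and generalize an attribute to `?` only when a positive example forces it. A minimal sketch, assuming hypothetical attribute values:

```python
# A minimal Find-S sketch for the "perfect world" rule-based setting.
# The feature values below are hypothetical toy data, not the lecture's exact table.

def find_s(examples):
    """Return the most specific hypothesis consistent with the positive examples.

    Each example is (features, label); in a hypothesis, a concrete value must
    match exactly and '?' matches anything.
    """
    hypothesis = None
    for features, label in examples:
        if label != "Yes":            # Find-S ignores negative examples
            continue
        if hypothesis is None:        # start from the first positive example
            hypothesis = list(features)
        else:                         # generalize only the mismatching attributes
            hypothesis = [h if h == f else "?"
                          for h, f in zip(hypothesis, features)]
    return tuple(hypothesis) if hypothesis else None

# Toy EnjoySport-style observations: (Sky, Temp, Humid, Wind, Water, Forecast)
data = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), "Yes"),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"), "Yes"),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), "No"),
]
print(find_s(data))  # ('Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same')
```

The second positive example disagrees only on Humid, so only that attribute is generalized; this also shows why such rules break easily once observations are noisy or inconsistent.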

Machine Learning Course, Week 1-2 (from YouTube)

Weekly Objectives
1. Motivate the study on
2. Short questions and answers on a story
   1) MLE
   2) MAP
3. Some basics

2. Short questions and answers on a story
4) Incorporating Prior Knowledge (Maximum a Posteriori)
P(θ|D) = P(D|θ) * P(θ) / P(D)

5) More Formulas from the Bayesian Viewpoint
P(θ|D) ∝ (is proportional to) P(D|θ) * P(θ)
P(D|θ) = θ^aH * (1−θ)^aT
P(θ) = ?
Bayes proposed using a Beta distribution, not a Binomial distribution:
P(θ) = θ^(α−1) * (1−θ)^(β−1) / B(α,β)
B(α,β) = Γ(α)Γ(β) / Γ(α+β), where Γ(α) = (α−1)!
Combining the two formulas above:
P(θ|D) ∝ P(D|θ) * P(θ) ∝ θ^aH * (1−θ)^aT * θ^(α−1) * (1−θ)^(β−1) = θ^(aH+α−1) * (1−θ)^(aT+β−1)

6) Maximum a Posteriori Estimation
Previously, we could obtain θ̂ via MLE:
P(D|θ) = θ^aH * (1−θ)^aT
θ̂ = aH / (aH + aT)
This time, obtaining θ̂ via MAP:
P(θ|D) ∝ θ^(aH+α−1) * (1−θ)^(aT+β−1)
θ̂ = (aH+α−1) / (aH+α+aT+?...
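The MLE and MAP estimates above can be compared numerically. A minimal sketch, with hypothetical counts and hyperparameters: the MAP estimate is the mode of the Beta(aH+α, aT+β) posterior, and with a uniform Beta(1, 1) prior it collapses back to the MLE.

```python
# Compare MLE and MAP estimates for the coin/thumbtack example:
# aH heads and aT tails observed, Beta(alpha, beta) prior on theta.
# The counts and hyperparameters below are hypothetical.

def mle(a_h, a_t):
    """Maximum likelihood estimate: theta_hat = aH / (aH + aT)."""
    return a_h / (a_h + a_t)

def map_estimate(a_h, a_t, alpha, beta):
    """Mode of the Beta(aH+alpha, aT+beta) posterior:
    theta_hat = (aH + alpha - 1) / (aH + alpha + aT + beta - 2)."""
    return (a_h + alpha - 1) / (a_h + alpha + a_t + beta - 2)

a_h, a_t = 3, 2                       # 3 heads, 2 tails
print(mle(a_h, a_t))                  # 0.6
print(map_estimate(a_h, a_t, 1, 1))   # 0.6 -- uniform prior recovers the MLE
print(map_estimate(a_h, a_t, 5, 5))   # 7/13, pulled toward the prior mean 0.5
```

With a stronger prior (larger α, β), the estimate moves toward the prior's mode; as aH and aT grow, the data dominates and MAP converges to MLE.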

Machine Learning Course, Week 1-1 (from YouTube)

Weekly Objectives
1. Motivate the study on
2. Short questions and answers on a story
   1) MLE
   2) MAP
3. Some basics

1. Motivation
The early part of the course mainly covers examples of machine learning.
1) Supervised Learning
- Learning in which we train the model by providing the target value of each data point
- Cases, such as
  - Spam filtering
  - Automatic grading
  - Automatic categorization
- Classification or Regression of
  - Hit or Miss: Something has either disease or not.
  - Ranking: Someone received either A+, B, C, or F.
  - Types: An article is either positive or negative.
  - Value prediction: The price of this artifact is X.
- Methodologies
  - Classification: estimating a discrete dependent value from observations
  - Regression: estimating a (continuous) dependent value from observations
2) Unsupervised Learning
- Learning without supervision: no target values are given, and the computer finds patterns on its own
- Cases, such as
  - Discovering clusters
  - Disc...
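The classification vs. regression distinction above can be sketched in a few lines: both estimate a dependent value from observations, but classification returns a discrete label while regression returns a continuous number. A minimal sketch with hypothetical data, using 1-nearest-neighbor for classification and a least-squares line fit for regression:

```python
# Classification vs. regression on toy, hypothetical data.

def classify_1nn(x, examples):
    """1-nearest-neighbor classification: return the discrete label
    of the closest training observation."""
    return min(examples, key=lambda e: abs(e[0] - x))[1]

def fit_line(points):
    """Least-squares regression: fit y = a*x + b and return (a, b)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Classification: discrete labels ("spam" / "ham")
labeled = [(1.0, "ham"), (2.0, "ham"), (8.0, "spam"), (9.0, "spam")]
print(classify_1nn(7.5, labeled))   # spam

# Regression: a continuous value (e.g. price as a function of size)
a, b = fit_line([(1, 2.0), (2, 4.1), (3, 5.9)])
print(round(a * 4 + b, 1))          # 7.9 -- prediction for x = 4
```

The same contrast carries over to the lecture's taxonomy: "Hit or Miss", "Ranking", and "Types" are classification problems, while "Value prediction" is regression.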