함께하는 데이터 분석
List: Neural Networks and Deep Learning (4)
- Deep neural network notation
- Forward propagation in a deep network
- Parameters W[l] and b[l]
- Vectorized implementation
- Intuition about deep representation
- Forward and backward functions
- Forward propagation for layer l (see the sketch after this list)
- Backward propagation for layer l
- What are hyperparameters?
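A minimal sketch of the per-layer forward step this post covers, assuming NumPy; the function name `linear_activation_forward` and the argument `g` are illustrative, not from the post:

```python
import numpy as np

def linear_activation_forward(A_prev, W, b, g):
    """One forward-propagation step for layer l (illustrative sketch).

    A_prev -- activations A[l-1] from the previous layer, shape (n[l-1], m)
    W      -- weight matrix W[l], shape (n[l], n[l-1])
    b      -- bias vector b[l], shape (n[l], 1)
    g      -- activation function g[l], e.g. np.tanh
    """
    Z = W @ A_prev + b  # Z[l] = W[l] A[l-1] + b[l], vectorized over all m examples
    A = g(Z)            # A[l] = g[l](Z[l])
    return A, Z         # Z is cached for the backward pass
```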
- Neural Network Representation
- Computing a Neural Network's Output
- Vectorizing across multiple examples
- Justification for vectorized implementation
- Activation functions
- Why do you need Non-Linear Activation Functions?
- Derivatives of Activation Functions
- Gradient descent for neural networks
- Formulas for computing derivatives
- What happens if you initialize weights to zero?
- Random initialization (see the sketch after this list)
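On the last two items, a minimal sketch of random initialization for a 2-layer network; the layer sizes and the 0.01 scale factor are illustrative assumptions, not values from the post:

```python
import numpy as np

def initialize_parameters(n_x, n_h, n_y, seed=0):
    """Random initialization for a 2-layer network (illustrative sketch).

    If W1 were all zeros, every hidden unit would compute the same output
    and receive the same gradient, so the units would never differentiate;
    this is the symmetry-breaking problem the post asks about.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((n_h, n_x)) * 0.01  # small weights keep tanh/sigmoid out of their flat regions
    b1 = np.zeros((n_h, 1))                      # biases may start at zero; symmetry is broken by W
    W2 = rng.standard_normal((n_y, n_h)) * 0.01
    b2 = np.zeros((n_y, 1))
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
```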
- Linear Regression
- Logistic Regression
- Logistic Regression cost function
- Gradient Descent
- Logistic Regression Gradient Descent
- Logistic Regression Gradient Descent on m examples
- Vectorization

```python
import numpy as np
import time

a = np.random.rand(1000000)
b = np.random.rand(1000000)

# Dot product with an explicit Python loop, timed in milliseconds
tic = time.time()
c = 0
for i in range(1000000):
    c += a[i] * b[i]
toc = time.time()
print('for loop :' + str(1000 * (toc - tic)))
```
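The excerpt cuts off at the loop timing; the point of the vectorization section is the comparison against NumPy's built-in dot product. A sketch of that vectorized counterpart, reusing `a`, `b`, `np`, and `time` from the block above (the print format mirrors the loop version and is an assumption):

```python
# Vectorized dot product; typically orders of magnitude faster than the loop
tic = time.time()
c = np.dot(a, b)
toc = time.time()
print('vectorized :' + str(1000 * (toc - tic)))  # elapsed time in milliseconds
```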