Introduction
Introduction
First Neural Network
Introducing Our First Problem
Neural Network with One Weight
How Our Network Learns (Backpropagation Made Easy)
Learning Through Repetition (Epochs)
Understanding Weight Updates
Introducing Our Second Problem
Trouble With Our Weight Updates
Fixing the Update Rule With Inputs
Gradient Descent, Loss and Cost Functions
Let's Get More Theoretical
Loss and Cost
Slope
Using Slope to Update
Unlocking Derivatives
Gradient Descent
The Role of Bias
Third Problem
Why We Need Bias
How Does Bias Work — And How Do We Update It?
Implementing Bias in Code
Code to Cortex (artificial vs. biological neurons)
From Biology to Code
Why Activation Functions Matter
Introducing Activation Functions
The XOR Challenge
Activation to the Rescue
Adjusting Weights with Activation in Mind
Let’s Derive the Sigmoid Together (optional)
Introducing Hidden Layers
The Math Behind Hidden Layer Learning
Backprop Begins Here
From Theory to XOR
Building a Multi-Layer Neural Network
Build a Network Workshop
A Matrix Makeover
Making a Modular Network
Let’s Test It
Try It By Yourself!
From Numbers to Images
Stepping Into Images
What Is An Image?
Getting the Data
Data Normalization
Data Augmentation
Training and Testing
Conclusion & References
Final Remarks
References