But what is a neural network? | Chapter 1, Deep learning

  • Published March 28, 2024
  • What are the neurons, why are there layers, and what is the math underlying it?
    Help fund future projects: / 3blue1brown
    Written/interactive form of this series: www.3blue1brown.com/topics/ne...
    Additional funding for this project provided by Amplify Partners
    Typo correction: At 14 minutes 45 seconds, the last index on the bias vector is n, when it's supposed to in fact be a k. Thanks for the sharp eyes that caught that!
    For those who want to learn more, I highly recommend the book by Michael Nielsen introducing neural networks and deep learning: goo.gl/Zmczdy
    There are two neat things about this book. First, it's available for free, so consider joining me in making a donation Nielsen's way if you get something out of it. And second, it's centered around walking through some code and data which you can download yourself, and which covers the same example that I introduce in this video. Yay for active learning!
    github.com/mnielsen/neural-ne...
    I also highly recommend Chris Olah's blog: colah.github.io/
    For more videos, Welch Labs also has some great series on machine learning:
    • Learning To See [Part ...
    • Neural Networks Demyst...
    For those of you looking to go even deeper, check out the text "Deep Learning" by Goodfellow, Bengio, and Courville.
    Also, the publication Distill is just utterly beautiful: distill.pub/
    Lion photo by Kevin Pluck
    Thanks to these viewers for their contributions to translations
    German: @fpgro
    Hebrew: Omer Tuchfeld
    Hungarian: Máté Kaszap
    Italian: @teobucci, Teo Bucci
    -----------------
    Timeline:
    0:00 - Introduction example
    1:07 - Series preview
    2:42 - What are neurons?
    3:35 - Introducing layers
    5:31 - Why layers?
    8:38 - Edge detection example
    11:34 - Counting weights and biases
    12:30 - How learning relates
    13:26 - Notation and linear algebra
    15:17 - Recap
    16:27 - Some final words
    17:03 - ReLU vs Sigmoid
    Correction 14:45 - The final index on the bias vector should be "k"
    ------------------
    Animations largely made using manim, a scrappy open source python library. github.com/3b1b/manim
    If you want to check it out, I feel compelled to warn you that it's not the most well-documented tool, and has many other quirks you might expect in a library someone wrote with only their own use in mind.
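
    For the curious, a minimal sketch of what a manim scene can look like. This uses the Manim Community fork's API (Scene, Create, Transform), which may differ from the 3b1b/manim repository linked above, so treat it as illustrative only:

    ```python
    from manim import Scene, Square, Circle, Create, Transform

    class SquareToCircle(Scene):
        def construct(self):
            square = Square()                     # start with a square
            circle = Circle()                     # target shape
            self.play(Create(square))             # animate drawing the square
            self.play(Transform(square, circle))  # morph the square into a circle
    ```

    Rendered from the command line with something like `manim -pql scene.py SquareToCircle`.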
    Music by Vincent Rubinetti.
    Download the music on Bandcamp:
    vincerubinetti.bandcamp.com/a...
    Stream the music on Spotify:
    open.spotify.com/album/1dVyjw...
    If you want to contribute translated subtitles or to help review those that have already been made by others and need approval, you can click the gear icon in the video and go to subtitles/cc, then "add subtitles/cc". I really appreciate those who do this, as it helps make the lessons accessible to more people.
    ------------------
    3blue1brown is a channel about animating math, in all senses of the word animate. And you know the drill with KRplus, if you want to stay posted on new videos, subscribe, and click the bell to receive notifications (if you're into that).
    If you are new to this channel and want to see more, a good place to start is this playlist: 3b1b.co/recommended
    Various social media stuffs:
    Website: www.3blue1brown.com
    Twitter: / 3blue1brown
    Patreon: / 3blue1brown
    Facebook: / 3blue1brown
    Reddit: / 3blue1brown

Comments • 7K

  • @EebstertheGreat
    @EebstertheGreat 6 년 전 +4985

    Most educational videos give viewers the impression that they are learning something, while in reality, they cannot reliably explain any of the important points of the video later, so they haven't really learned anything. But your videos give me the impression that I haven't learned anything, because all the points you make are sort of obvious in isolation, while in reality, after watching them I find myself much better able to explain some of the concepts in simple, accurate terms. I hope more channels follow this pattern of excellent conceptual learning.

    • @3blue1brown
      @3blue1brown  6 년 전 +886

      Huh, I never thought about it this way, but that's a nice way to phrase what I'm shooting for.

    • @user-ol2gx6of4g
      @user-ol2gx6of4g 6 년 전 +71

      Being able to explain it at a conceptual level isn't good enough. You can only understand it by practicing (i.e., building neural nets yourself and playing with them).

    • @3blue1brown
      @3blue1brown  6 년 전 +224

      busi magen, I don't have a story nearly as touching as that of you and your Grandmother, but I think I would cite my dad as teaching this by example when I was growing up, in that the way that he would describe things centered on what they're actually doing in simple terms, rather than on learning the appropriate jargon.

    • @davidgiles9378
      @davidgiles9378 6 년 전 +4

      :-D Eebsterthegreat: a not-so-obvious, insightful compliment in reality, as long as you don't read the "I haven't learned anything" part in isolation.

    • @user-ol2gx6of4g
      @user-ol2gx6of4g 6 년 전 +5

      A particle accelerator is used for creating/verifying hypotheses. Your analogy is terrible.
      Regarding learning a new skill, one needs to practice rather than just passively absorb information. This is why homework exists.
      Regarding neural nets, anyone who thinks they can "explain" NNs after watching this video is, frankly, laughable (not saying the content of this video is bad).

  • @nickrollings8839
    @nickrollings8839 3 년 전 +3020

    Quote: “Any fool can make something complicated. It takes a genius to make it simple.” ... Nailed it.

    • @hexa3389
      @hexa3389 3 년 전 +30

      "What one fool can do, another can"
      Some guy who wrote a really popular calc textbook

    • @genericperson8238
      @genericperson8238 3 년 전 +32

      It's a statement I don't agree with. At university, we are taught things in a formal and abstract way, not just for the sake of overcomplicating things. I don't think professors, who are primarily researchers, should be considered "fools" because they fail to teach their subject in a more intuitive manner.

    • @Solaris428
      @Solaris428 3 년 전 +94

      @@genericperson8238 a good researcher is not necessarily a good teacher.

    • @Dongdot123
      @Dongdot123 3 년 전 +12

      @@genericperson8238 Yes, they're a fool in pedagogy

    • @Solo-vh9fm
      @Solo-vh9fm 3 년 전 +16

      A genius doesn't really make it simple so much as they make it concise.

  • @mheidari988
    @mheidari988 년 전 +910

    I have been programming for more than ten years and I have never seen anyone explain a complex idea in such clean and clear terms. Well done.

    • @ABHISHEKKUMAR-bl5wy
      @ABHISHEKKUMAR-bl5wy 9 개월 전 +5

      Yeah, that's the power of manim!

    • @hritijrana1409
      @hritijrana1409 개월 전

      can you explain the animation at @9:30

    • @arjunkc3227
      @arjunkc3227 개월 전

      Because you only program without mathematics

    • @tentimesful
      @tentimesful 14 일 전

      secret scientist are much further than this, like they can wirelessly from satellite I think give dream images and change dreams you make yourself... for me they always try to make it ugly,...

    • @edwardmacnab354
      @edwardmacnab354 12 일 전

      Markov Chains ? how simple is that ?

  • @shivshankarpe
    @shivshankarpe 년 전 +1611

    I am blown away by the visual clarity of this description of an otherwise complex technology! More please, I am willing to pay!

    • @amitjose3739
      @amitjose3739 10 개월 전 +46

      thanks fellow indian bro.

    • @andrewl2787
      @andrewl2787 9 개월 전 +60

      = 2 USD😂😂😂😂😂😂😂😂😂😂😂😂

    • @Dtomper
      @Dtomper 9 개월 전 +369

      @@andrewl2787 Don't be rude bro, appreciate his good intention, 2$ might mean a lot from where he's from.

    • @HAL9000.
      @HAL9000. 9 개월 전 +74

      @@Dtomper Well said. Peace and love worldwide.

    • @shivshankarpe
      @shivshankarpe 9 개월 전 +131

      @@andrewl2787 Okay, so how much did you pay? I will match you, come on now.

  • @AwesumBear
    @AwesumBear 4 년 전 +2941

    I can't wait for neural networking to be able to recognize my doctor's prescription.

    • @tinysim
      @tinysim 4 년 전 +69

      They need to study pharmacists to figure that out.

    • @pimp2570
      @pimp2570 4 년 전 +12

      That would be magnificent!

    • @user-qh2vq6md2g
      @user-qh2vq6md2g 4 년 전 +6

      From what I understand (I am also a dummy, I'm just telling you what I think):
      the inputs are the pixels,
      the weights relate to how white or black each pixel is.
      Like, let's say we need the first pixel to be white:
      we need the computer to know there is a pixel there (hence it's an input),
      and we need the computer to change how white or black it is (hence the computer's ability to change weights).

    • @jonavuka
      @jonavuka 4 년 전 +23

      its actually impossible, that level of calligraphy is indecipherable

    • @luisendymion9080
      @luisendymion9080 3 년 전 +17

      Doctors' handwriting makes string theory a piece of cake for humans, AIs and aliens.

  • @ss_avsmt
    @ss_avsmt 년 전 +445

    No man, we don't get notifications for your videos. We search for 3b1b. That's how powerful your content is.

    • @FlyingSavannahs
      @FlyingSavannahs 년 전 +24

      I don't understand notifications either. What, we're supposed to do things other than watch all the remaining 3b1b videos we haven't yet seen between notifications? Who would be so wasteful with their lives???

    • @bruhngl
      @bruhngl 년 전 +7

      I just type questions into KRplus and always seem to get his videos as answers

    • @cvspvr
      @cvspvr 개월 전 +1

      ​@@FlyingSavannahsusually you'd enable notifications only for youtubers who make videos that you're almost always interested in

  • @benjaminmllerjensen8705

    I'm currently taking a computer science math course where the professor strongly advised everyone to watch this exact video series to get an intuition about what all the math is actually used for.

    • @vgdevi5167
      @vgdevi5167 년 전 +7

      Bro, which college you studying in now?

    • @benjaminmllerjensen8705
      @benjaminmllerjensen8705 년 전 +33

      @@vgdevi5167 Aarhus University, Denmark

    • @benjaminmllerjensen8705
      @benjaminmllerjensen8705 년 전 +13

      @@vgdevi5167 1st semester :)

    • @Nalianna
      @Nalianna 년 전 +2

      Good to learn from, and also, entertaining to watch. double win.

    • @snow3570
      @snow3570 년 전 +7

      Linear Algebra? That’s what I’m following in about 6 weeks, which is basically the math behind Neural Networks

  • @bambambhole8282
    @bambambhole8282 4 년 전 +2122

    In schools everyone taught us to practice maths but this man teaches us to imagine maths

  • @MrTyty527
    @MrTyty527 4 년 전 +506

    Around 2 years ago I was a sophomore statistics student and had no idea what deep learning was, until I came across this video and the 3b1b channel. His clear explanation of neural networks and the animations blew my mind. Since then I started my journey in machine learning. For some random reason I clicked on this video again, and realized how long my journey in this field has been. This video really changed my life and I am really grateful for it.

    • @clubofsercettechnologies9135
      @clubofsercettechnologies9135 3 년 전 +6

      @3Blue1Brown
      Please give a heart .......

    • @yashrathi6862
      @yashrathi6862 3 년 전 +6

      I am in class 11 currently and unfortunately I am not able to understand this. Could you point me to some prerequisites?

    • @patrick2288
      @patrick2288 3 년 전 +2

      @@yashrathi6862 The linear algebra series that was recommended in the video is a good start, other than that you should keep watching this video and you will start to understand it better the more you do. I am also in class 11 and that is what helped me

    • @jialiu1796
      @jialiu1796 3 년 전 +12

      One year ago I came across this video. I couldn't understand a single word of it. A year later, I am back and I still cannot understand it.
      I am fucking stupid.......

    • @manswind3417
      @manswind3417 3 년 전 +1

      @@yashrathi6862 To be honest there are no real "prerequisites" for learning neural networks; in the end it just comes down to how familiar you are with the concepts of basic graph theory. However, I admit that it can be pretty overwhelming for someone to try to comprehend all the stuff at once, which is why being savvy with linear algebra is a must.
      Apart from that, you should try your hand at programming once; perhaps the algorithmic mode of thinking will help you develop an intuition for neural networks. And yes, of course try to explore graph theory, for neural networks will resonate much better with you once you do, imo.

  • @tvo18868
    @tvo18868 3 개월 전 +234

    Your videos are singlehandedly keeping my PhD research on track. Thank you for your time and effort!

    • @randomguy4738
      @randomguy4738 3 개월 전 +7

      This is what you're studying for your PhD? This is what I learned in highschool...

    • @blueberried9329
      @blueberried9329 3 개월 전

      @@randomguy4738 Which class exactly did you learn about neural networks in? Did you also learn multi-variable calculus (fundamental to even the simplest neural network) in your high school class? I would love to attend!

    • @P_Stark_3786
      @P_Stark_3786 3 개월 전 +31

      @@randomguy4738 Learning is not about PhD or high school, it's about need.
      Whenever you need it, you learn it.

    • @user-py3py5sx2f
      @user-py3py5sx2f 2 개월 전

      ​@@P_Stark_3786obviously there's a reason they are separated, you won't be awarded a PhD if what you "need" to learn isn't at PhD level

    • @muhammadhidayat1337
      @muhammadhidayat1337 2 개월 전 +17

      @@randomguy4738 What you learned and what he is studying are on different levels. Just shut your mouth, son.

  • @nityasingh3
    @nityasingh3 2 년 전 +328

    This video kickstarted my journey in ML a year back. Trust me, back then I watched this video three times before I finally understood it. It might be challenging for some to get, but when you get it, it just feels amazing.

    • @chitranshsrivastav4648
      @chitranshsrivastav4648 년 전 +1

      @@DawnshieId why do you think it cannot go beyond 1?

    • @jackgrothaus2722
      @jackgrothaus2722 년 전 +4

      Felt like the brain chair meme when this video finally clicked (after the 4th watch)

    • @abinashkarki
      @abinashkarki 년 전

      @@chitranshsrivastav4648 How do you weight?

    • @cameleonarabic8124
      @cameleonarabic8124 7 개월 전 +3

      after 8:38 felt really hard to understand.. I will try again and comment back

    • @CSgof___yourself
      @CSgof___yourself 6 개월 전 +1

      Hell yeah. Im literally in your shoes rn

  • @kummer45
    @kummer45 5 년 전 +9790

    I study mathematics, physics and architecture. By definition this man is an ORACLE in the strict meaning of the word.
    In all honesty, I never imagined someone explaining complex topics with the dexterity this man has. He is literally an institution and an outstanding teacher.
    The computer graphics and the illustrations are simply astonishing. This guy never evades complexity. He never evades complex arguments. He illustrates the complexity and dives into an exhaustive explanation of the details.
    It's extremely rare to see a professor and dedicated creator put this much effort into explaining, animating and describing mathematics the way he does.

    • @Aj-ch5kz
      @Aj-ch5kz 5 년 전 +73

      Well said sir. 🙌

    • @viharcontractor1679
      @viharcontractor1679 5 년 전 +216

      @@suwarnachoudhary8591 This video never claimed to be an expert level tutorial so stop comparing it to those type of tutorials.

    • @suwarnachoudhary8591
      @suwarnachoudhary8591 5 년 전 +39

      @@viharcontractor1679 When did I say that? Please read my comment again. I have no issues with the tutorial; my objection is to the comment I replied to. One should always make appropriate comments. Just as it is wrong to say something rude, it is also wrong to give false praise. Have you read kummer45's comment? Calling the tutor of the video an "Oracle"? Really? That kind of word should be used for someone like Swami Vivekananda, not for some ordinary tutorial. It almost hurts to see such misuse of words.

    • @JCake
      @JCake 5 년 전 +153

      @@suwarnachoudhary8591 Come on man, what is wrong with / about that comment? The video is fantastic in every way, It is dense enough that I've had to watch it several times over, yet is able to communicate the concept of a neural network in such a way that even my pea brain can grasp this topic, please think before commenting and make a proper comparison.

    • @suwarnachoudhary8591
      @suwarnachoudhary8591 5 년 전 +25

      @@JCake First of all, don't use the word 'man', I am a girl. I never said the video is bad. It is fine. Why is everyone coming over here and defending the video? Is it so difficult to understand what I am saying? The comment from kummer45 is an exaggeration and I stand by that, because it is. If a video is good enough, one does not need to watch it a second time to understand the concept. I had seen one video by Mathew Renze on the same topic. That long tutorial was the first time I came across neural networks. It was more than an hour and a half of videos. I never watched it again and still remember every single concept. Now, if this man is an oracle, what will you call him?

  • @hawkite3185
    @hawkite3185 3 년 전 +3425

    The fact that I was sent here by my university lecturer is a testament to how good 3Blue1Brown is.

  • @erwinschrodinger9693
    @erwinschrodinger9693 3 년 전 +2348

    In class : printf("Hello world");
    The exam :

  • @cliffrosen5180
    @cliffrosen5180 년 전 +551

    Brilliantly explained

    • @uncommonsense9973
      @uncommonsense9973 년 전 +9

      I had learned about neural networks and knew the mechanics of it. But this is way better explained - you nailed it - brilliantly.

    • @randompersondfgb
      @randompersondfgb 년 전 +18

      @@uncommonsense9973 Sorry to disappoint you but the commentor isn't the creator of the video lol

    • @wnyduchess
      @wnyduchess 년 전 +5

      @@randompersondfgb he was agreeing with the commenter bro

    • @randompersondfgb
      @randompersondfgb 년 전 +1

      @@wnyduchess To quote the reply itself; “this is way better explained - *you* nailed it - brilliantly”

    • @wnyduchess
      @wnyduchess 년 전 +2

      @@randompersondfgb yes, you're not understanding. Uncommon Sense is saying "you" nailed it. The you is Cliff Rosen, the original commenter. He's saying that Cliff Rosen nailed it when he wrote the comment "Brilliantly explained".

  • @buihung3704
    @buihung3704 5 개월 전 +12

    This is how you teach Deep Learning, people. I've seen lectures that fall into one of two groups: too hard, or too shallow/general. You have struck a balance between them. Thank you so much!

  • @BhuvanGabbita
    @BhuvanGabbita 3 년 전 +1611

    It takes 3000-4000 lines of code to make those graphics possible, he's a freakin legend

    • @omarz5009
      @omarz5009 3 년 전 +29

      Which is best for neural nets? Python or C++?

    • @jagaya3662
      @jagaya3662 3 년 전 +204

      @@omarz5009 The main downside of Python is that it's a high-level language and hence kind of slow. But for ML and NN it has several powerful libraries (pandas, numpy, tensorflow) which make up for that. Since Python supports calling into C code, those libraries can be optimized like heck, to the point that bothering with the stuff in C++ is just wasted time. Plus Python is much easier to learn, hence more people use it and develop for it.
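
      As a rough illustration of that point, a sketch of the video's 784-16-16-10 network declared with those libraries (assuming TensorFlow 2.x and its bundled Keras; untrained, structure only):

      ```python
      import tensorflow as tf

      # Architecture from the video: 28*28 = 784 input pixels,
      # two hidden layers of 16 neurons, 10 outputs (digits 0-9).
      model = tf.keras.Sequential([
          tf.keras.Input(shape=(784,)),
          tf.keras.layers.Dense(16, activation="sigmoid"),
          tf.keras.layers.Dense(16, activation="sigmoid"),
          tf.keras.layers.Dense(10, activation="sigmoid"),
      ])

      model.summary()  # ~13,002 parameters: the ~13,000 weights and biases from the video
      ```

      The heavy numerical work inside those layers runs in compiled kernels, which is why the Python-level slowness rarely matters here.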

    • @omarz5009
      @omarz5009 3 년 전 +9

      @@jagaya3662 That makes sense. Thanks for explaining :)

    • @omarz5009
      @omarz5009 3 년 전 +5

      ​@@anelemlambo497thank you for explaining :)

    • @Sujitth
      @Sujitth 3 년 전 +7

      How were these graphics and animations actually made?

  • @blurr3272
    @blurr3272 3 년 전 +121

    This is my first introduction to machine learning and I only had to watch it twice to get it. It really goes to show how good a teacher this guy is; the effort he puts in is nothing short of amazing!

    • @filippians413
      @filippians413 2 년 전 +11

      Definitely am gonna have to watch it again. Got half way through and it started to get pretty heavy

  • @samethingsmakeuslaughmakeuscry

    I am currently doing my Master's in Data Science and this 18 minute video is better than any course I have taken so far

  • @shubhampipada3130
    @shubhampipada3130 개월 전 +2

    My God! No words to express as to how you made such a complex topic to be understood using visuals so easily! Hats off!!

  • @akshunair3367
    @akshunair3367 3 년 전 +242

    Our generation is lucky to have mentors like you, thank you so much sir!

    • @nadanfrenkiel763
      @nadanfrenkiel763 2 년 전 +2

      Fine Indeed, Refreshing Super Tenacious

    • @elgary9074
      @elgary9074 2 년 전 +1

      This is the 80s generation we were listening rock music and looking how to get things done better we grew without mobile phones just sitting front of a computer or playing basketball outside in the park. We grew without rap, hip-hop, either thinking that the gang is a cool guy! this is what now generations require badly!

    • @MiguelAngel-fw4sk
      @MiguelAngel-fw4sk 2 년 전 +12

      ​@@elgary9074 I hope that someday scientists will be able to understand what you have written.

    • @paromita_ghosh
      @paromita_ghosh 11 개월 전

      @@MiguelAngel-fw4sk 🤣

  • @someeng5043
    @someeng5043 4 년 전 +222

    This is my very first time commenting on a KRplus video, and it's just to say: This is the best explanation of anything ever.

    • @NepalSadikshya
      @NepalSadikshya 4 년 전

      yet, people don't understand

    • @tommyproductions891
      @tommyproductions891 4 년 전

      Some Eng congrats man

    • @the7th494
      @the7th494 4 년 전

      i couldn't understand anything over a minute in

    • @bensfons
      @bensfons 4 년 전 +5

      Wait until you see his video about the Fourier Transform. My GOD that vid is the best thing i've seen in ages.

  • @zulucharlie5244
    @zulucharlie5244 년 전 +6

    This channel and the visualizations it produces to teach subjects like this one is the best advance in the history of communicating mathematical ideas. It's extraordinarily inspiring that one person can have such a large impact on the world today (and for generations to come). Thank you, Grant Sanderson.

    • @test-sc2iy
      @test-sc2iy 6 개월 전 +1

      Dude, inspiring comment yourself.

  • @luukburger
    @luukburger 년 전 +9

    I just love the way the concepts of neural networks are explained in this video. After watching it, you feel like you have an idea about the "building blocks" of a neural network. Since I'm new to the topic, it's hard to judge whether crucial things are left out or over-simplified, but I feel it's a great introduction to the topic. Thanks a lot for sharing this!

  • @AmeraldFang
    @AmeraldFang 5 년 전 +466

    Is anyone else nominating this series for the "Distill Prize for Clarity" in 2019? I really think he deserves it, excellent visualizations.

    • @WepixGames
      @WepixGames 5 년 전 +12

      Yeah I would, every day of the year.

    • @buburayam6557
      @buburayam6557 5 년 전 +5

      Totally! Animation and visualization here makes understanding as clear as a crystal!

    • @milanpandey8431
      @milanpandey8431 5 년 전

      yesssss

    • @lmf1799
      @lmf1799 5 년 전

      @@benisrood Can it be nominated for anything else?

  • @tadm123
    @tadm123 4 년 전 +616

    I'm studying AI for my masters degree and my professor told everyone to watch this video to understand the concept :D

  • @Betamax84
    @Betamax84 7 개월 전 +9

    This is the most comprehensive and understandable explanation of a neural network. Thank you.

  • @nueno3816
    @nueno3816 년 전 +25

    Another reason worth mentioning for why ReLU is used instead of Sigmoid is simply that it is much cheaper to compute (clipping negative values vs. exponential operations). Another important issue with the σ function is its gradient, which is always below 0.25. Since modern networks tend to have many layers, and because multiplying many values < 1 together quickly becomes really small (vanishes), networks with a larger number of layers won't train when using Sigmoid.
    And as always, amazing video, animation and explanation!
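
    A quick numerical sketch of the gradient point above (not from the video; it just illustrates the 0.25 bound mentioned in the comment):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_grad(x):
        s = sigmoid(x)
        return s * (1.0 - s)           # peaks at 0.25 when x = 0

    def relu_grad(x):
        return (x > 0).astype(float)   # exactly 1 for every positive input

    x = np.linspace(-5.0, 5.0, 1001)
    print(sigmoid_grad(x).max())       # ~0.25
    print(0.25 ** 10)                  # ~9.5e-07: the best case for ten chained sigmoid layers
    print(relu_grad(x).max())          # 1.0: ReLU lets gradients pass through unchanged
    ```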

  • @shaileshrana7165
    @shaileshrana7165 3 년 전 +956

    As a person who has self-learned a bit of python and is just trying to learn this stuff, this is exactly the best place to begin.

    • @DeFabulisHistoria
      @DeFabulisHistoria 3 년 전 +14

      My thoughts exactly!

    • @duykhanh7746
      @duykhanh7746 2 년 전 +7

      At the end of the video, he showed the ReLU function f(a) = a for a > 0, so the value of the neuron doesn't have to be between 0 and 1?

    • @funwithstudy2333
      @funwithstudy2333 2 년 전 +1

      That's me

    • @B20C0
      @B20C0 2 년 전 +29

      @@duykhanh7746 A bit late but if your question hasn't been answered yet: It doesn't really matter if you have a value >1. Basically anything above 0 is an activation and you can also view it as the size of "a" being the intensity of the activation. Biological neurons can also be more active by firing in fast succession (up until they reach the maximum possible firing rate of like 250-1000Hz depending on the source), but you don't want to introduce things like loops in artificial neurons to not slow down your network. So to simulate this kind of behavior, you just let your output get bigger. You can compensate for the lack of an upper limit in the following neurons by adjusting the weights and the biases.
      TL;DR: No. :D
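
      A tiny sketch of that point, with made-up numbers, showing that ReLU activations are not capped at 1 and that the next layer's own weights can scale them back down:

      ```python
      import numpy as np

      def relu(x):
          return np.maximum(0.0, x)

      z = np.array([-2.0, 0.5, 3.7])    # pre-activations of three neurons
      a = relu(z)                        # [0.0, 0.5, 3.7] -- note the 3.7 > 1
      print(a)

      # A downstream neuron can compensate with a small weight on the large activation:
      w_next = np.array([0.8, 1.0, 0.1])
      print(w_next @ a)                  # weighted sum seen by the next layer: 0.87
      ```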

    • @Skynet_the_AI
      @Skynet_the_AI 년 전

      🙂

  • @sauravvagarwal
    @sauravvagarwal 4 년 전 +324

    THE TEACHING ASIDE , THOSE GRAPHICS MAN! TAKES LOT OF EFFORT!

    • @deepak4u23
      @deepak4u23 3 년 전 +9

      Exactly....Lot of effort is required to make this type of video.

    • @hmm7458
      @hmm7458 3 년 전 +3

      why iam seeing Indians everywhere

    • @kartikeya9997
      @kartikeya9997 3 년 전 +6

      @@hmm7458 cause u are also an indian...

    • @pandatobi5897
      @pandatobi5897 3 년 전 +4

      @@kartikeya9997 that's not an answer lmaooo

    • @ryanwhite7401
      @ryanwhite7401 3 년 전 +1

      I know I can't do better. I'll be referring students in my neural networks class to these videos, lol.

  • @anudeepayinaparthi7493

    Every few years I come back to watch this series. The most intuitive and understandable explanation of neural networks that exists

  • @sangwookim5551
    @sangwookim5551 년 전 +3

    3Blue1Brown is the go-to channel that explains complex math concepts with the highest clarity without any loss of complexity of the topic. Simply brilliant!

  • @MrJonndoe
    @MrJonndoe 4 년 전 +8

    One of the few teachers that don't make you feel stupid, but actually help you understand the topic. I appreciate the time you spend on this.

  • @SaifUlIslam-db1nu
    @SaifUlIslam-db1nu 5 년 전 +698

    I've written up some notes from the video for quick reading. Hope it helps somebody.
    • Neural networks can recognize handwritten digits, letters, words (in general, tokens).
    • What are neurons?
      ○ Something that holds a number in [0, 1] (its "activation").
      ○ The higher the number, the stronger the activation.
    • Consider a 28*28 grid of pixels in which each pixel holds a value between 0 and 1 (its activation).
      ○ Those 784 activations form the first layer; the last layer contains 10 units, one per digit.
      ○ Values are passed from layer to layer until they reach the last (10-unit) layer, again as numbers between 0 and 1. The closer a unit's value is to 1, the more likely it is that the scanned image represents that unit's digit, so the unit with the highest value indicates which digit the image is.
      ○ The choice of 16 units in each of the two middle layers is arbitrary.
      ○ Each unit is linked to units in the next layer and contributes to their activation, which in turn drives further activation.
      ○ The hope is that each unit corresponds to some identification of how much a certain region "lights up", and it sends a value onward to units that react to the value they receive.
      ○ To decide whether a certain unit lights up, let each incoming unit's activation be 'a' and assign each connection a weight 'w'. The weighted sum over all incoming units is:
        w1*a1 + w2*a2 + w3*a3 + w4*a4 + … + wn*an
      ○ Picture these weights laid out on the pixel grid, each one positive or negative; here 'green' represents positive and 'red' represents negative.
      ○ If we only care about a certain region, we make the weights zero everywhere else, so the sum just adds up the activations of that region.
      ○ If we also make the weights negative in the pixels surrounding a bright strip, the sum is largest exactly when that strip is the edge we are looking for.
      ○ Of course this weighted sum can land anywhere on the number line. To 'squish' it into [0, 1], we apply:
        σ(x) = 1/(1 + e^(-x))
        which is the sigmoid function, or logistic curve. Our expression becomes:
        σ(w1*a1 + w2*a2 + w3*a3 + w4*a4 + … + wn*an)
      ○ But what if you don't want the unit to light up for just any positive value, and instead only when the weighted sum clears some threshold, such as > 10? That threshold is the bias (a bias for inactivity). With this example, the expression becomes:
        σ(w1*a1 + w2*a2 + w3*a3 + w4*a4 + … + wn*an - 10)
        Here, 10 is the "bias".
      ○ All of these knobs and dials are what "learning" refers to: finding the values of the weights and biases that produce the expected behavior.
      ○ The complete expression can be written compactly as:
        a(1) = σ(W*a(0) + b)
        ((1) and (0) are superscripts here)
        where W is a k*n matrix whose elements are the weights,
        a(0) is an n*1 vector whose elements are the activations of the previous layer,
        b is a k*1 vector whose elements are the biases of the next layer's units (k, not n; see the typo correction in the description).
      ○ NOTE: the sigmoid function is not used very often any more; it has largely been replaced by ReLU (Rectified Linear Unit), defined as:
        ReLU(a) = max(0, a), i.e. f(a) = a for a >= 0 and f(a) = 0 for a < 0.
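
    A minimal numpy sketch of the forward pass these notes describe (layer sizes follow the video's 784-16-16-10 network; the weights and the input "image" here are random placeholders, not trained values):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    sizes = [784, 16, 16, 10]          # 28*28 inputs, two hidden layers, 10 digits

    # W[i] is a (k x n) matrix, b[i] a (k x 1) vector -- k rows, matching the
    # typo correction at 14:45 (one bias entry per neuron of the next layer).
    W = [rng.standard_normal((k, n)) for n, k in zip(sizes[:-1], sizes[1:])]
    b = [rng.standard_normal((k, 1)) for k in sizes[1:]]

    def forward(a0):
        """a0: (784, 1) column of pixel brightnesses in [0, 1]."""
        a = a0
        for Wi, bi in zip(W, b):
            a = sigmoid(Wi @ a + bi)   # a(l+1) = sigma(W * a(l) + b)
        return a                       # (10, 1) vector of output activations

    image = rng.random((784, 1))       # placeholder "image"
    print(forward(image).ravel())
    ```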

  • @masoudnazeri9906
    @masoudnazeri9906 2 년 전 +3

    This was one of the best tutorials on the fundamentals of neural networks. I was formerly a dentist and am now a neuroscience research fellow working on computer vision applications in behavioral neuroscience, and I have never encountered a tutorial that explains things so simply and concisely. Thanks for that :-)

  • @EMEducation
    @EMEducation 년 전 +3

    I absolutely love this video, I saw this video a few months ago and couldn't understand it too well until after I went into linear algebra and took a course on deep learning. This explanation is still one of the best ones I've ever seen. I truly respect your work here and please keep it up!

  • @JayHendren
    @JayHendren 6 년 전 +138

    @3Blue1Brown - A quick suggestion: Red-green color deficiency is the most common form of colorblindness. When trying to represent information via a color spectrum, could you please choose colors other than red and green for this reason? Red and blue are good choices because they are distinguishable by both red-green color deficient people as well as blue-yellow color deficient people, which is the second-most common form of colorblindness. I was completely unable to tell which pixels have positive weights and which ones had negative weights in your example due to my colorblindness. Thanks, and keep up the fantastic videos :)

    • @sergey1519
      @sergey1519 5 년 전 +4

      The upper row of this white zone had negative weights, the central part had positive, and the bottom row had negative weights. This means that if you have a horizontal line this neuron will have high values, but if you have a vertical line or any other pattern then it will have a value closer to 0.

    • @BreaksFast
      @BreaksFast 5 년 전 +34

      Windows 10 has colour filters that will fix this for you. Go to Settings, Ease of Access, and click on 'Colour filters'.

  • @thisaintmyrealname1
    @thisaintmyrealname1 4 년 전 +36

    "Even when it works, dig into why" - 3B1B. Your lessons are pure gold sir. I'm here after watching the entire Essence of Linear Algebra. Thank you.

  • @nisargjoshi8236
    @nisargjoshi8236 2 개월 전 +1

    Even 6 years after this video was made, when we already have something as advanced as GPT-4, as a humble beginner in this domain I find this video so, so valuable for understanding the very basics! A huge thank you and kudos, sir!

  • @patriciocastillo2772

    You have been a key element in my Machine Learning education! I am a visual person, so your videos help me so much in understanding the concepts behind the math.

  • @skepticmoderate5790
    @skepticmoderate5790 6 년 전 +69

    I just watched Welch labs machine learning playlist a few weeks ago. It was mind-blowing. I'm glad you're getting into machine learning too! : )

  • @katariegels258
    @katariegels258 2 년 전 +44

    I am just astounded. I spent so much time trying to understand this concept. Everywhere I looked, people would show a similar neural network animation, but no one ever really explained and exemplified every single step, layer, term and the mathematics behind it.
    The video is really well structured, with amazing animations. Extremely well done. My mind is so blown I can barely write this comment.

  • @agustinjoe
    @agustinjoe 년 전 +1

    I saw this video when it first came out, I was amazed at how Deep Neural Networks worked and how he could explain them so clearly. Now I am working as a Machine Learning engineer. Thank you so much for helping me find my passion

  • @pamr001
    @pamr001 3 개월 전

    I know you read this all the time, but I must say it. Your videos are simply incredible! Your work reshapes education. You deserve every cent that this platform puts in your pocket.

  • @DavidGPeters
    @DavidGPeters 4 년 전 +375

    how is it possible that I can lie in my bed on a Sunday and am presented with mind-boggling cutting edge knowledge told by an incredibly soothing voice in a world class manner on a 2K screen of a pocket supercomputer basically for free

    • @smit_1449
      @smit_1449 3 년 전 +40

      Welcome to the 21st century

    • @Unstable_Diffusion89
      @Unstable_Diffusion89 3 년 전 +26

      yet 90% of people use that supercomputer to mindlessly scroll feeds.

    • @Unstable_Diffusion89
      @Unstable_Diffusion89 3 년 전 +11

      @@Charge11 And software engineering advancements, thousands of years of intellectual history, biological evolution of conscious brains and so forth.
      point is, it's miraculous if you step back far enough.

    • @stargrabbitz6726
      @stargrabbitz6726 3 년 전

      because it isn't

    • @ericvelasquez1282
      @ericvelasquez1282 3 년 전 +7

      It's not free, Google's massive network of AI neuron is harvesting terabytes upon terabytes of information about you every time you click on anything.

  • @efulmer8675
    @efulmer8675 3 년 전 +64

    3Blue1Brown
    "Sigmoid Squishification Function": 11:23
    The most brilliantly named function I have ever heard. Absolutely brilliant. The merger of the technical with the simple, with a double alliteration for easy memory.

  • @nithinkandula4346
    @nithinkandula4346 11 개월 전 +1

    The way u guys animate and explain the difficult topics in the most simple visual way is impressive...kudos and thank you for the efforts.

  • @mittensmacks9239
    @mittensmacks9239 8 개월 전

    I have to be honest...During school and college I never was a fan of math at all....But this channel is actually changing my mind. I even bought a math book!!! This is a good channel!

  • @christianaustin782
    @christianaustin782 6 년 전 +615

    PART 1? THERE WILL BE MORE? YAS 3BLUE1BROWN IS DOING NEURAL NETWORKS! TODAY IS A GOOD DAY

    • @HowardMullings
      @HowardMullings 6 년 전 +1

      You will find this series very helpful as well.
      krplus.net/bidio/ktydZIRfiG-8g6Q

    • @atlas7425
      @atlas7425 6 년 전 +34

      I totally agree, my friend. Today is a very important day in the history of youtube mathematics. And since I am the 100th person who liked your comment, I would like to give a little inspirational speech:
      To all mathematicians, physicists, engineers, computer scientists or people who want to become one of those in the future,
      today is a very important day. The best YouTube mathematician, 3Blue1Brown, has made a video about neural networks and plans to make others about it in the future. I think it's not necessary to explain the inherent significance this topic has concerning the future of our technology and our understanding of the universe and the processes going on in it. These videos will help the new scientific generations to cope with the structures still to be found and to bring on a new and deeper understanding of the things that have been found and examined before. Humanity is reaching a point where the wish to understand the world is higher than it has ever been before. You, dear future scientists, can all be a part of the progress we are just going through, you just have to have the Will and the Strength for it, never give up if things aren't working properly or as you expected and always remember: At the end, everything will be fine, so if it isn't fine, it's not the end.
      Actually, I have reached the end of my little inspirational speech (and it is fine ;) ), and to complement it well, I want to quote a famous poem which plays an important role in a very good and famous science fiction movie....
      "Do not go gentle into that good night,
      Old age should burn and rave at close of day;
      Rage, rage against the dying of the light.
      Though wise men at their end know dark is right,
      Because their words had forked no lightning they
      Do not go gentle into that good night.
      Good men, the last wave by, crying how bright
      Their frail deeds might have danced in a green bay,
      Rage, rage against the dying of the light."
      Thank you.

    • @xipity
      @xipity 6 년 전 +1

      This comment is lit!

    • @zionj104
      @zionj104 6 년 전

      yas

    • @hawaiijim
      @hawaiijim 6 년 전

      RNN? LSTM?

  • @paulah1639
    @paulah1639 6 년 전 +132

    This is the best intro to neural networks I have ever seen. The presentation is excellent! The animations are very, very, very helpful, especially in understanding the formulas and matrices and how they came to be. Thanks a million. Looking forward to the next one.

  • @mgonetwo
    @mgonetwo 년 전 +6

    Thanks for all the effort you put into the work! Cannot stress enough how glad and inspired I am for having people like you on the planet.

  • @nicksohacki7114
    @nicksohacki7114 년 전 +47

    amazingly well-explained. hats off to you, good sir

  • @shaktisingh3864
    @shaktisingh3864 2 년 전 +616

    One “like” is not enough for the work that has gone into making such a video. This video should be part of the curriculum, and he should get royalties for it. Awesome work!

    • @natew4724
      @natew4724 년 전 +3

      Yes!

    • @benjaminmllerjensen8705
      @benjaminmllerjensen8705 년 전 +8

      I'm currently taking a computer science math course where the professor strongly advised everyone to watch this exact video series to get an intuition about what all the math is actually used for.

    • @drunkpy1590
      @drunkpy1590 5 개월 전

      +1

  • @_mto
    @_mto 6 년 전 +164

    Neural networks are a topic I've wanted an intuitive understanding of for a while. 3b1b has the most intuitive explanations on KRplus.
    This video could not be any better.

    • @alchemicalmoon3426
      @alchemicalmoon3426 6 년 전

      MTO Intuitive understanding?

    • @bobbob3630
      @bobbob3630 6 년 전 +2

      It isn't intuitive understanding if you have been looking for an explanation for a while xd

    • @Fermion.
      @Fermion. 6 년 전 +1

      N·J Media - Intuitive understanding is understanding that in a triangle, for example, the side across from a given angle has to increase or decrease in length relative to its opposite angle, without a mathematical proof.

  • @Suryakanthi1982
    @Suryakanthi1982 개월 전

    This is the simplest and best way to explain neural networks. It is the best introductory video on neural networks I have watched so far.

  • @NoobJang
    @NoobJang 10 개월 전 +2

    Neatly explained; I never thought machine learning would be this easy to comprehend.
    Most of the YouTubers out there just throw a bunch of concepts around without actually explaining the relations and the logic of the whole design.
    Most of the time I thought I was too stupid to understand these things and that I'd have to take a specialized college course or something,
    but it turns out that wasn't the case; it's just that YouTube needs more people like you who can explain things in sophisticated detail in simple language.
    Thanks for your contribution, bro

  • @abdulhadishoufan5353
    @abdulhadishoufan5353 6 년 전 +4

    Behind this material is an extreme shot of giftedness. Explaining something is not easy. You first need a solid physical model for the topic in your brain and then you need to translate this model into a mental model that can be faithfully exported into others' brains. I congratulate you for this excellent job and I hope that you appreciate what you are and what you are doing. This is much more important than how much money this business brings.

  • @abhiram6329
    @abhiram6329 3 년 전 +215

    Every second of this video is a Pre-requisite to the next second of the video :D

  • @monome3038
    @monome3038 개월 전

    It's incredible and inspiring to witness someone teaching a subject where they know all the details, all the whys and hows behind each one. I can only thank you and say that you inspire me to enjoy learning, especially if I can help someone learn with more ease a subject that was hard for me to grasp. I wish you all the best and thank you so, so, so much.

  • @SamoFix
    @SamoFix 10 개월 전 +2

    I've never seen anything like this. Unbelievable that such an understandable visualization and illustration can be done.
    Crazily hard and great work!

  • @marcellod.7290
    @marcellod.7290 2 년 전 +38

    I am a Data Scientist and I would like to tell you THANKS.
    I have NEVER met anyone with the ability to teach complex things in this way.
    A M A Z I N G.
    Please continue like this, for example with other statistics videos. Your videos could substitute for many university courses.

  • @amir650
    @amir650 6 년 전 +252

    The best introduction to Neural Nets I've ever seen. Kudos!

  • @Brillibits
    @Brillibits 년 전 +4

    This was one of the first videos I watched when learning machine learning. Now it's my job! :)

  • @cakec9
    @cakec9 년 전

    i have recommended this channel in every single conversation I had in 2022. Thank you!

  • @rahulsundaresan218
    @rahulsundaresan218 6 년 전 +53

    This channel is so damn good. Other channels give some terrible analogies, and some others explain it in extreme technical detail. This strikes the perfect balance and provides a foundation for understanding the more technical details.

    • @Mosfet510
      @Mosfet510 6 년 전

      I wish this guy was my math teacher back in high school.

    • @Certio0
      @Certio0 6 년 전

      Just shows that good teaching skills are very rare.

    • @MrFredazo
      @MrFredazo 6 년 전

      Understandable animations timed perfectly with the words, and no holes in the explanations, do the trick.

  • @jsnadrian
    @jsnadrian 3 년 전 +12

    Watching this for a second time, and I can't believe how illuminating it is to come back to the basics and get a renewed understanding. Grant, you're a treasure.

  • @dimitriospapadopoulos2959

    Best explanation of a neural network. Some other sources just talk about backtracking and stuff, and you're like "what do I need that for?", "what is the bias for?", etc.
    This series explains everything well. After that you can watch any other video, since they don't cover this basic idea.

  • @leeowwh
    @leeowwh 년 전 +7

    Hey Grant. Great video btw. Just a question: at 14:50, why is it that the last bias index is 'n' and not 'k'? I thought you mentioned that a bias would be added for each node of the next layer, which, as per your representation, has 'k' nodes.
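
    For reference, the typo correction in the description covers exactly this point: with W taken as a k-by-n matrix, the dimension bookkeeping forces the bias vector to have k entries (one per neuron of the next layer), so its last index should be k rather than n:

    ```latex
    a^{(1)} = \sigma\!\left(W a^{(0)} + b\right), \qquad
    W \in \mathbb{R}^{k \times n},\quad a^{(0)} \in \mathbb{R}^{n}
    \;\Longrightarrow\; W a^{(0)} \in \mathbb{R}^{k}
    \;\Longrightarrow\; b \in \mathbb{R}^{k}.
    ```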

  • @brianbarefootburns3521

    Finally, a video that does more than just present some neurons and layers and say, “here’s an activation function.” Your video describes how the model is developed and why the algorithmic approach is appropriate for the problems neural networks try to solve. Thanks!

  • @Mypersonalyoutube123
    @Mypersonalyoutube123 4 년 전 +2457

    Bio teacher: what is a neuron?
    Me: a thing that holds a number between 0 and 1

    • @zbr4cker117
      @zbr4cker117 4 년 전 +29

      lmao good one.

    • @prabeshpaudel5615
      @prabeshpaudel5615 4 년 전 +109

      get out of my class

    • @dr3v1l1993
      @dr3v1l1993 4 년 전 +59

      You could actually associate synapses and their enhancers (weight > 0) as well as their repressors (weight < 0) with the weights, and the target neurons with the number between 0 and 1.
      Then the amount of input the target neuron gets determines whether it will fire itself (and how strongly).
      So our brain works very similarly to the artificial neural networks we created, and it's just a matter of time until there is an AI that can genuinely learn like we can (-> artificial general intelligence or AGI) and therefore be able to surpass all of us (-> artificial super intelligence or ASI).

    • @haykg
      @haykg 4 년 전 +16

      InSomnia DrEvil Great explanation, but you ruined the joke lmao

    • @SuperBhavanishankar
      @SuperBhavanishankar 4 년 전

      @@prabeshpaudel5615 haha

  • @chaoszero6867
    @chaoszero6867 년 전 +1

    You are a beautiful human being. I've watched you even before college, just out of interest and curiosity. Now I'm in college, and I can't thank you enough. Math is so much more amazing when you have many different eyes to view it in. Also I'm currently taking linear algebra and an intro to neural networks.

  • @mdthelegend9882
    @mdthelegend9882 년 전 +3

    Wow. This is one of best videos on deep learning I've ever seen.

  • @FacultyofKhan
    @FacultyofKhan 6 년 전 +1308

    Thank you 3b1b. This video certainly gave me a deep enough understanding to allow my neural networks to retain the information.
    EDIT: seems like I'm not the only one making lame puns about the title.

    • @PeterNjeim
      @PeterNjeim 6 년 전 +5

      For the first argument in the video: "You can recognize that all of these images are 3's, even though the pixels are very different." is complete bullshit. Handwriting varies *_EXTREMELY_* person by person and so humans are very used to looking at different ways to write the same thing, especially with things like cursive. It's not a surprise that we can identify the images, please don't talk like it is a surprise, makes me feel like you're less intelligent than you really are.

    • @Yuras20
      @Yuras20 6 년 전 +32

      Calm down a little... Everything that's been said in this video is in the context of machine learning, computers, mathematics, algebra, etc. So if we want to treat the brain as a complex computer, then its ability to recognize letters from pixels is amazing and gives food for thought about how the human brain really works.

    • @Rurexxx
      @Rurexxx 6 년 전 +29

      Peter Njeim it's not a surprise that you can identify images. The surprise is how complicated image recognition actually is if you think about it.

    • @bruno_sjc_
      @bruno_sjc_ 6 년 전 +53

      Peter Njeim, do people invite you for parties?

    • @jonathanmercedes3583
      @jonathanmercedes3583 6 년 전

      Faculty of Khan M

  • @20sur20edu
    @20sur20edu 3 년 전 +61

    This will go down as one of the best lectures in history. What an amazing and concise explanation of something I thought I would never understand ...

  • @rigeshyogi
    @rigeshyogi 년 전

    This is by far the best explanation I have seen to understand how a neural net works. Kudos.

  • @StuartVinton
    @StuartVinton 년 전 +2

    I’ve been messing around with building neural networks and, honestly, I come back to this video time and time again; it’s a bit mind blowing; that means I’m essentially training my brain, i.e. building a biological neurological pathway to remember how artificial neurological pathways work! 🤯

  • @MattBargain
    @MattBargain 3 년 전 +10

    I work in a company developing just this kind of stuff. I’m still baffled how incredibly intelligent people are and I have no idea how they can repeatedly accept me as worthy enough to be with them.

    • @blackbriarmead1966
      @blackbriarmead1966 2 년 전

      impostor syndrome. There will almost always be someone better than you, but you are probably better than you give yourself credit for

  • @vimalalwaysrocks
    @vimalalwaysrocks 2 년 전 +6

    ML grad student here and hands down Grant covered an entire chapter concisely and very clearly in this video. I don’t think reading any academic books will give you this amount of intuition on this subject within a few minutes. Still mesmerized by the effort!

  • @Lawls
    @Lawls 년 전 +3

    Thank you so much for making this. I just finished a lecture on this and didn't understand it at all. In 5 minutes here I understood so much more.

  • @eengpriyasingh706

    KRplus doesn't send notifications for this video. You provide the best lectures... even after joining many courses, only this platform clears my doubts.

  • @akshayasubramanian4311

    This is the first time I'm commenting on a KRplus video and honestly, I'm so thankful people like you exist! I wish only the best for you in whatever you do!

  • @tessdejaeghere6972
    @tessdejaeghere6972 3 년 전 +5

    You're the first person to explain bias in an intuitive manner. Thank you.

  • @user-ls1dq5gu1j
    @user-ls1dq5gu1j 5 개월 전

    Explaining something like Deep Learning with so much clarity is very hard to find, hats off to you and thank you for sharing the knowledge!

  • @scriptles
    @scriptles 7 개월 전 +13

    AI, neural networks, machine learning... these things are going to be a HUGE part of our future.

  • @agustindangelo1412
    @agustindangelo1412 5 년 전 +109

    Wow, a lot of the things I've learned in this first year of systems engineering are captured in this video, but previously I didn't understand the real essence of them. Thank you for these amazing vids! Greetings from Argentina :)

  • @seanbaeker4310
    @seanbaeker4310 5 년 전 +6

    I am not really from a math background but I am hugely interested in programming, and I must say this video has made it easy for me to understand the math behind neural networks!
    I loved it, thank you!!!

  • @jrangel6725
    @jrangel6725 2 개월 전

    I don't even speak English perfectly, but the way you explain is even better than someone who knows about this in my language. Thanks a lot!

  • @RealSchimpa
    @RealSchimpa 년 전 +1

    I like the way this complex topic is explained visually. It makes it easy to understand. 👍

  • @nicolasderoover7210
    @nicolasderoover7210 2 년 전 +95

    I'm in my first year of engineering, looking to go into CS, and this video makes me extremely excited for my coming education. I've already watched so many of your videos, and they've all had a similar effect. Thank you so much!

  • @nadaelnokaly4950
    @nadaelnokaly4950 4 년 전 +12

    seriously, this is the first time i find that ML makes sense! you are amazing

  • @marklord7614
    @marklord7614 23 일 전

    I just had to stop this video to comment. This video is next level. The best explanation of neural networks I've encountered.

  • @jamescarr2191
    @jamescarr2191 7 개월 전 +11

    Fantastic visualized learning!

  • @codemaster1768
    @codemaster1768 2 년 전 +7

    It took me one week to understand this when I was reading a university lecture. You explained it to me in 20 mins. You are such a savior. Thanks 3Blue1Brown!

  • @rohitgavirni3400
    @rohitgavirni3400 6 년 전 +25

    What a time to be alive, with such KRplusrs around!

  • @unebonnevie
    @unebonnevie 년 전

    The brain is SO incredible! VERY fast in access time in searching and recognition with the eyes/ears/the tongue and pretty much unlimited storage without paying Google/Amazon monthly fees! The hard part is recognizing via tastes and sounds and associating to memories/experiences. An amazing organic machine!

  • @mattstrong8281
    @mattstrong8281 개월 전 +1

    I remember watching this years ago in my undergrad when I was fumbling around with ML for fun. This got me on track, and I can confidently say is one of the biggest reasons I am now a CS PhD student at an excellent university studying robotics and AI.

  • @aishasyed9756
    @aishasyed9756 2 년 전 +14

    Can't believe how well explained and intuitive this is. I aspire to become a teacher like you.

  • @DevashishGuptaOfficial

    The most intuitive channel on KRplus...

  • @giovanni-cx5fb
    @giovanni-cx5fb 6 년 전 +12

    Most fascinating channel on YT, hands down.

  • @churchofmarcus
    @churchofmarcus 3 년 전 +6

    Currently doing my capstone on deep learning and this is among the best, and easiest to understand descriptions I have seen.