Let's build GPT: from scratch, in code, spelled out.

Share
Source code
  • Published Apr 23, 2024
  • We build a Generatively Pretrained Transformer (GPT), following the paper "Attention is All You Need" and OpenAI's GPT-2 / GPT-3. We talk about connections to ChatGPT, which has taken the world by storm. We watch GitHub Copilot, itself a GPT, help us write a GPT (meta :D!). I recommend people watch the earlier makemore videos to get comfortable with the autoregressive language modeling framework and basics of tensors and PyTorch nn, which we take for granted in this video.
    Links:
    - Google colab for the video: colab.research.google.com/dri...
    - GitHub repo for the video: github.com/karpathy/ng-video-...
    - Playlist of the whole Zero to Hero series so far: • The spelled-out intro ...
    - nanoGPT repo: github.com/karpathy/nanoGPT
    - my website: karpathy.ai
    - my twitter: / karpathy
    - our Discord channel: / discord
    Supplementary links:
    - Attention is All You Need paper: arxiv.org/abs/1706.03762
    - OpenAI GPT-3 paper: arxiv.org/abs/2005.14165
    - OpenAI ChatGPT blog post: openai.com/blog/chatgpt/
    - The GPU I'm training the model on is from Lambda GPU Cloud, I think the best and easiest way to spin up an on-demand GPU instance in the cloud that you can ssh to: lambdalabs.com . If you prefer to work in notebooks, I think the easiest path today is Google Colab.
    Suggested exercises:
    - EX1: The n-dimensional tensor mastery challenge: Combine the `Head` and `MultiHeadAttention` into one class that processes all the heads in parallel, treating the heads as another batch dimension (answer is in nanoGPT).
    - EX2: Train the GPT on your own dataset of choice! What other data could be fun to blabber on about? (A fun advanced suggestion if you like: train a GPT to do addition of two numbers, i.e. a+b=c. You may find it helpful to predict the digits of c in reverse order, as the typical addition algorithm (that you're hoping it learns) would proceed right to left too. You may want to modify the data loader to simply serve random problems and skip the generation of train.bin, val.bin. You may want to mask out the loss at the input positions of a+b that just specify the problem using y=-1 in the targets (see CrossEntropyLoss ignore_index). Does your Transformer learn to add? Once you have this, swole doge project: build a calculator clone in GPT, for all of +-*/. Not an easy problem. You may need Chain of Thought traces.)
    - EX3: Find a dataset that is very large, so large that you can't see a gap between train and val loss. Pretrain the transformer on this data, then initialize with that model and finetune it on tiny shakespeare with a smaller number of steps and lower learning rate. Can you obtain a lower validation loss by the use of pretraining?
    - EX4: Read some transformer papers and implement one additional feature or change that people seem to use. Does it improve the performance of your GPT?
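    For EX1, a minimal sketch of the idea (hyperparameter names like `n_embd`/`n_head` follow the lecture's conventions; this is an illustrative solution, the reference answer lives in nanoGPT's `CausalSelfAttention`):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BatchedMultiHeadAttention(nn.Module):
    """EX1 sketch: compute all heads in one matmul by treating heads as a batch dim."""
    def __init__(self, n_embd, n_head, block_size):
        super().__init__()
        assert n_embd % n_head == 0
        self.n_head = n_head
        self.qkv = nn.Linear(n_embd, 3 * n_embd, bias=False)  # fused q,k,v projection
        self.proj = nn.Linear(n_embd, n_embd)
        self.register_buffer('tril', torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):
        B, T, C = x.shape
        hs = C // self.n_head  # head_size
        q, k, v = self.qkv(x).split(C, dim=2)
        # (B, T, C) -> (B, n_head, T, head_size): heads become another batch dimension
        q = q.view(B, T, self.n_head, hs).transpose(1, 2)
        k = k.view(B, T, self.n_head, hs).transpose(1, 2)
        v = v.view(B, T, self.n_head, hs).transpose(1, 2)
        wei = (q @ k.transpose(-2, -1)) * hs**-0.5            # scaled attention scores
        wei = wei.masked_fill(self.tril[:T, :T] == 0, float('-inf'))
        wei = F.softmax(wei, dim=-1)
        out = wei @ v                                          # (B, n_head, T, head_size)
        out = out.transpose(1, 2).contiguous().view(B, T, C)   # re-assemble the heads
        return self.proj(out)
```

    Because the per-head loops become one batched matmul, this is both shorter and faster than a Python list of `Head` modules.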
    Chapters:
    00:00:00 intro: ChatGPT, Transformers, nanoGPT, Shakespeare
    baseline language modeling, code setup
    00:07:52 reading and exploring the data
    00:09:28 tokenization, train/val split
    00:14:27 data loader: batches of chunks of data
    00:22:11 simplest baseline: bigram language model, loss, generation
    00:34:53 training the bigram model
    00:38:00 port our code to a script
    Building the "self-attention"
    00:42:13 version 1: averaging past context with for loops, the weakest form of aggregation
    00:47:11 the trick in self-attention: matrix multiply as weighted aggregation
    00:51:54 version 2: using matrix multiply
    00:54:42 version 3: adding softmax
    00:58:26 minor code cleanup
    01:00:18 positional encoding
    01:02:00 THE CRUX OF THE VIDEO: version 4: self-attention
    01:11:38 note 1: attention as communication
    01:12:46 note 2: attention has no notion of space, operates over sets
    01:13:40 note 3: there is no communication across batch dimension
    01:14:14 note 4: encoder blocks vs. decoder blocks
    01:15:39 note 5: attention vs. self-attention vs. cross-attention
    01:16:56 note 6: "scaled" self-attention. why divide by sqrt(head_size)
    Building the Transformer
    01:19:11 inserting a single self-attention block to our network
    01:21:59 multi-headed self-attention
    01:24:25 feedforward layers of transformer block
    01:26:48 residual connections
    01:32:51 layernorm (and its relationship to our previous batchnorm)
    01:37:49 scaling up the model! creating a few variables. adding dropout
    Notes on Transformer
    01:42:39 encoder vs. decoder vs. both (?) Transformers
    01:46:22 super quick walkthrough of nanoGPT, batched multi-headed self-attention
    01:48:53 back to ChatGPT, GPT-3, pretraining vs. finetuning, RLHF
    01:54:32 conclusions
    Corrections:
    00:57:00 Oops "tokens from the future cannot communicate", not "past". Sorry! :)
    01:20:05 Oops I should be using the head_size for the normalization, not C
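    The second correction in code: a tiny variance check (shapes chosen arbitrarily for illustration) of why the scores are scaled by sqrt(head_size) and not C. With unit-variance q and k, raw scores have variance around head_size; scaling restores roughly unit variance so softmax doesn't saturate toward one-hot vectors.

```python
import torch
torch.manual_seed(0)

B, T, head_size = 16, 32, 16
q = torch.randn(B, T, head_size)    # unit-variance queries
k = torch.randn(B, T, head_size)    # unit-variance keys
wei = q @ k.transpose(-2, -1)       # raw scores: variance grows like head_size
wei_scaled = wei * head_size**-0.5  # correct: divide by sqrt(head_size), not C
print(wei.var().item(), wei_scaled.var().item())  # ~head_size vs ~1
```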
  • Science & Technology

Comments • 2.3K

  • @fgfanta
    @fgfanta 1 year ago +5238

    Imagine being between your job at Tesla and your job at OpenAI, being a tad bored and, just for fun, dropping on KRplus the best introduction to deep-learning and NLP from scratch so far, for free. Amazing people do amazing things even for a hobby.

    • @crimpers5543
      @crimpers5543 1 year ago +139

      He's probably bored at both of those jobs. Once people get to high-level director positions, they are far removed from the trenches of code. Lots of computer scientists have a passion for actually writing and explaining code, not just managing things.

    • @aaronhpa
      @aaronhpa 1 year ago +96

      and yet people still say socialism isn't viable, when most of the great stuff on the internet was done for free / without expectation of compensation

    • @shyvanatop4777
      @shyvanatop4777 1 year ago +102

      @@aaronhpa free market developed the skills but sure man

    • @aaronhpa
      @aaronhpa 1 year ago +46

      @@shyvanatop4777 did it? I think the hard work and dedication of all these people did, not the ability to sell it.

    • @jayakrishnankr7501
      @jayakrishnankr7501 1 year ago +65

      @@aaronhpa, it's all about incentives. Why would you ever do anything if you could get everything without effort? In a fictional utopia socialism might be viable, but human beings don't work like that. For example, this platform came about because of capitalism. I think achieving a balance between the two would be best: building the platform came from capitalism, while the content here is socialism, maybe something like that.

  • @8LFrank
    @8LFrank 1 year ago +3727

    Living in a world where a world-class top guy posts a 2-hour video for free on how to make such cutting-edge stuff. I've barely started this tutorial, but first I just wanted to say thank you, mate!

    • @FobosLee
      @FobosLee 1 year ago +43

      Wait. It's him! I didn't understand at first. Thought it was a random IT KRplusr

    • @DavitBarbakadze
      @DavitBarbakadze 1 year ago +7

      How did it go?

    • @Tinjinladakh
      @Tinjinladakh 1 year ago +1

      Hey Jake, what should I do before learning programming? Are all basic languages the same or different? Should I learn only Python?

    • @ChrisSmith-lk2vq
      @ChrisSmith-lk2vq 1 year ago +1

      Totally agree!!

    • @atlantic_love
      @atlantic_love 1 year ago +12

      "Cutting edge"? The only cutting will be your job. Think before getting your panties all wet. The only people excited for this crap are investors, employers and failed programmers looking for some sort of edge.

  • @jamesfraser7394
    @jamesfraser7394 1 year ago +703

    Wow! I knew nothing and now I am enlightened! I actually understand how this AI/ML model works now. As a near-70-year-old who just started playing with Python, I am a living example of how effective this lecture is. My humble thanks to Andrej Karpathy for allowing me to see into and understand this emerging new world.

    • @user-ks8xf3ie3g
      @user-ks8xf3ie3g 10 months ago +59

      Good for you, youngster. 75 and will be doing this kind of thing till I drop... Still running my technology company and doing contract work. Cheers.

    • @mrcharm767
      @mrcharm767 10 months ago +8

      What makes you learn these things at the age of 70?

    • @jamesfraser7394
      @jamesfraser7394 10 months ago +15

      @@mrcharm767 I want to analyze more stocks, the way I would, in a shorter time. ;)

    • @fawzishafei5565
      @fawzishafei5565 10 months ago +6

      @@mrcharm767 The sky is the limit.....!

    • @fmailscammer
      @fmailscammer 10 months ago +9

      I’m always excited to learn new things, hope I’m still learning at 70!

  • @BAIR68
    @BAIR68 4 months ago +183

    I am a college professor, and I am learning GPT from Andrej. Every time I watch this video, I not only learn the content but also how to deliver any topic effectively. I would vote him the "Best AI teacher on KRplus". Salute to Andrej for his outstanding lectures.

    • @noadsensehere9195
      @noadsensehere9195 3 months ago

      which university?

    • @bohanwang-nt7qz
      @bohanwang-nt7qz 2 months ago

      Hey, I'd like to introduce you to my AI learning tool, Coursnap, designed for youtube courses! It provides course outlines and shorts, allowing you to grasp the essence of 1-hour in just 5 minutes. Give it a try and supercharge your learning efficiency!

  • @amazedsaint
    @amazedsaint 1 year ago +849

    All other youtube videos: There is this amazing thing called ChatGPT
    Andrej: Hold my beer 🍺
    Seriously - we really appreciate your time and effort in creating this, Andrej. This will do a lot of good for humanity by making the core concepts accessible to mere mortals.

    • @syedshoaibshafi4027
      @syedshoaibshafi4027 1 year ago

      You can do it more easily using an LSTM

    • @zuu2051
      @zuu2051 1 year ago +14

      @@syedshoaibshafi4027 are you really saying that out loud? Dude is still living in 2010 🤣

    • @kevinremmy5812
      @kevinremmy5812 1 year ago

      lit😅

    • @redsnflr
      @redsnflr 1 year ago

      Mere mortals with at least basic programming and Python knowledge, but yes.

    • @kemalware4912
      @kemalware4912 1 year ago +1

      🍺

  • @softwaredevelopmentwiththo9648

    Thank you for taking the time to create these lectures. I am sure it takes a lot of time and effort to record and cut these. Your effort to level up the community is greatly appreciated. Thanks Andrej.

  • @fslurrehman
    @fslurrehman 1 year ago +93

    I knew only Python, math, and the definitions of NN, GA, ML, and DNN. In 2 hours, this lecture has not only given me an understanding of the GPT model, but also taught me how to read AI papers and turn them into code, how to use PyTorch, and tons of AI definitions. This is the best lecture and practical application on AI, because it not only gives you an idea of DNNs, but also gives you code straight from the research papers and a final product. Looking forward to more lectures like these. Thanks Andrej Karpathy.

  • @I_am_who_I_am_who_I_am
    @I_am_who_I_am_who_I_am 24 days ago +32

    I did something like this in 1993. I took a long text and calculated the probability of one word (I worked with words, not tokens) following another by parsing the full text.
    And I successfully created a single-layer perceptron parrot that could spew almost meaningful sentences.
    My professors told me I should not pursue the neural network path because it was practically abandoned. I never trusted them. I'm glad to see neural networks' glorious comeback.
    Thank you Andrej Karpathy for what you have done for our industry and humanity by popularizing this.

  • @gokublack4832
    @gokublack4832 1 year ago +234

    Wow! Having the ex-lead of ML at Tesla make tutorials on ML is amazing. Thank you for producing these resources!

  • @yusufsalk1136
    @yusufsalk1136 1 year ago +528

    The best notification ever.

  • @rafaelsouza4575
    @rafaelsouza4575 1 year ago +208

    I was always scared of the Transformer diagram. Honestly, I never understood how such a schema could make sense until this day, when Andrej enlightened us with his super teaching power. Thank you so much! Andrej, please save the day again by doing one more class, about Stable Diffusion!! Please, you are the best!

  • @ProductivityMo
    @ProductivityMo 1 year ago +23

    Thank you Andrej! I can't imagine the amount of time and effort it took to put this 2-hour video together! Very, very educational in breaking down how GPT is constructed. Would love to see a follow-up on tuning the model to answer questions at a small scale!

  • @JainPuneet
    @JainPuneet 1 year ago +817

    Andrej, I cannot comprehend how much effort you have put into making these videos. Humanity is thankful to you for making these publicly available and educating us with your wisdom. It is one thing to know the stuff and apply it in a corporate setting, and another to use it instead to educate millions for free. This is one of the best kinds of charity a CS major can do. Kudos to you and thank you so much for doing this.

    • @vicyt007
      @vicyt007 1 year ago +8

      Making this video is super simple for a specialist like him. It’s like creating a Hello World program for a computer scientist.

    • @JainPuneet
      @JainPuneet 1 year ago +30

      @@vicyt007 I beg to differ. I am from the area, and I can imagine how much time he must have spent offline to come up with the right abstractions.

    • @vicyt007
      @vicyt007 1 year ago +1

      @@JainPuneet I agree that it took him some time to make this video, but I don’t believe it was a tough task.

    • @hpmv
      @hpmv 1 year ago +15

      @@vicyt007 People who have expertise in an area aren't always good teachers. Being able to show others how it works in an organized, easy-to-understand manner is very tricky. On the surface it looks easy, but if you try making a video like this yourself, chances are you'll find it much harder than you think.

    • @vicyt007
      @vicyt007 1 year ago +1

      @@hpmv I know it was not an easy task, but at least he knows what he is talking about; it's just a matter of explaining concepts. He was a teacher for a long time, so it's his job, which he is doing here for free!
      But in my opinion, this video did not target people with zero knowledge of maths / ML / AI / Python, because in that case you must admit it is quite hard to understand. Yet it was watched by nearly 2M people, many of whom lack the skills to follow it. Briefly, I think this video targeted skilled people but was watched by everybody. Why not?

  • @antopolskiy
    @antopolskiy 1 year ago +137

    It is difficult to comprehend how lucky we are to have you teaching us. Thank you, Andrej.

  • @aojiao3662
    @aojiao3662 4 months ago +15

    The clearest, most intuitive, and best-explained transformer video I've ever seen. Watched it as if it were a TV show, and that's how down-to-earth this video is. Shoutout to the man, the legend.

  • @ShihgianLee
    @ShihgianLee 11 months ago +11

    This lecture answers ALL my questions from the 2017 "Attention Is All You Need" paper. I was always curious about the code behind the Transformer. This lecture quenched my curiosity with a Colab to tinker with. Thank you so much for your effort and time in creating this lecture to spread the knowledge!

  • @JoseLopez-ox7sq
    @JoseLopez-ox7sq 1 year ago +183

    This is simply fantastic. I think it would be beneficial for learners to see the actual process of training, the graphs in W&B, and how they can try to train something like this themselves.

    • @AndrejKarpathy
      @AndrejKarpathy 1 year ago +181

      makes sense, potentially the next video, this one was already getting into 2 hours so I wrapped things up, would rather not go too much over movie length.

    • @jdejota1029
      @jdejota1029 1 year ago +72

      @@AndrejKarpathy Please don't worry about going over movie length; I enjoyed every minute of the video. It's the first time I've attended an in-depth class on what's under the hood of a model.

    • @nikitaandriievskyi3448
      @nikitaandriievskyi3448 1 year ago +21

      @@AndrejKarpathy I think people would watch these videos even if they were 10 hours long, so don't worry about making them too long :)

    • @patpearce8221
      @patpearce8221 1 year ago +8

      @@AndrejKarpathy don't listen to these sycophants. Size matters.

  • @meghanaiitb
    @meghanaiitb 1 year ago +61

    What a feeling! Just finished sitting with this for the weekend, building along and finally understanding Transformers. More than anything, a sense of fulfilment. Thanks Andrej.

  • @mmedina
    @mmedina 1 year ago +3

    Just wanted to thank you for your efforts. The video is great! Clear, concise, and very understandable. The way you start from scratch, and little by little start building every block of the paper is just awesome. Thank you very much!

  • @user-co4op9ok4b
    @user-co4op9ok4b 9 months ago +29

    I cannot thank you enough for this material. I've been a spoken language technologist for 20 years, and this plus your micrograd and makemore videos has given me a graduate-level update in less than 10 hours. Astonishingly well-prepared and presented material. Thank you.

  • @rcuzzy
    @rcuzzy 1 year ago +90

    Andrej, I know there are probably a million other things you could be working on or efforts you could put your mind towards, but seriously, thank you for these videos. They are important, they matter, and they are providing many of us with a foundation from which to learn, build, and understand A.I., and from which to develop these models further. Thank you again, and please keep doing these.

    • @reinhodl7377
      @reinhodl7377 1 year ago +3

      Seriously, Andrej is just so very kind in his way of explaining things. His Shakespeare LSTM article way back ("The Unreasonable Effectiveness of Recurrent Neural Networks") was what got me seriously into ML in the first place. And while I've since (professionally) moved to development work unrelated to ML/AI, this is the exact kind of thing that hooks me back in. Andrej knows the people watching this are not idiots and doesn't treat them as such, but at the same time fully understands how opaque even basic AI concepts can be if all you ever really interact with is pre-trained models. There's tons of value in explaining this stuff in such a practical way.

  • @Marius12358
    @Marius12358 1 year ago +24

    I'm enjoying this whole series so much, Andrej. These videos make me understand neural networks much better than anything so far in my Bachelor's. As an older student with a large incentive to be time-efficient, this has been a godsend. Thank you so much!! :D

  • @nazgulizm
    @nazgulizm 9 months ago +5

    Thank you for taking the time and effort to share this, Andrej! It is a great help in lifting the veil of abstraction that made it all seem inaccessible, opening up that world to the ML/AI uninitiated like me. I don't understand all of it yet, but I'm now oriented and you've given me a lot of threads I can pull on.

  • @thegrumpydeveloper

    So happy to see Andrej back teaching more. His articles before Tesla were so illuminating and distilled complicated concepts into things we could all learn from. A true art. Amazing to see videos too.

  • @lkothari
    @lkothari 1 year ago +7

    This was incredible, Andrej! Really appreciate how you intersperse teaching a concept with coding and building step by step. This is the first of your videos that I have watched, and I can't wait to watch all the others.

  • @zechordlord
    @zechordlord 1 year ago +12

    Thanks so much for making this! I could grasp about 80% of everything with my programming background and a little university-level machine learning, but it no longer feels like magic. This format of hands-on coding, along with the thought process behind it, is way better than reading a paper and trying to piece things together.

  • @IllIl
    @IllIl 1 year ago +7

    Dude, thank you so much for this. It was a seriously awesome dive into the implementation with great explanations along the way. I've read/watched a lot of ML content and this has got to be one of the clearest lectures I've come across - even better than the usual famous online uni lectures. Thank you! (And I'll be rewatching it too! :)

  • @Grey_197
    @Grey_197 10 months ago +9

    Broke my back just to finish this video in a single sitting. It's a lot to take in at once; I think I'll have to implement it bit by bit over the span of a day to actually assimilate everything.
    I am very happy with the lecture/tutorial, and waiting for more. The time and effort that made this video possible are highly admirable and respectable.
    Thank you Andrej.

  • @NicholasRenotte
    @NicholasRenotte 1 year ago +83

    This is AMAZING! You're an absolute legend for sharing your knowledge so freely like this, Andrej! I'm finally getting some time to get into transformer architectures, and this is a brilliant deep dive; going to spend the weekend walking through it!! Thank you🙏🏽

    • @varunahlawat9013
      @varunahlawat9013 1 year ago +1

      Waiting for your take on this too!

    • @eliotharreau7627
      @eliotharreau7627 1 year ago +1

      Hi Nicholas, I don't understand all this code. I just have one question: does it work? And is it like ChatGPT? Thanks bro.

    • @kyriakospelekanos6355
      @kyriakospelekanos6355 1 year ago +1

      @@eliotharreau7627 This is a demonstration of HOW chatgpt works

    • @eliotharreau7627
      @eliotharreau7627 1 year ago

      @@kyriakospelekanos6355 I think it is not only how ChatGPT works, but also code that can do things LIKE ChatGPT. That's why I'm surprised!!! Thank you anyway.

    • @satoshinakamoto5710
      @satoshinakamoto5710 1 year ago

      bro can't wait for your video on this!

  • @coemgeincraobhach236

    Day 2 of implementing this is done; about one more evening to go, I think. Thanks so much for this! I spent so long down the rabbit hole of CNNs that it's really refreshing to try a completely different type of model. No way I could have done it without a lecture of this quality! Legend

  • @rangilanaoermajhi1820
    @rangilanaoermajhi1820 11 months ago +13

    Just went through all of his videos - MLP, gradients, and of course the backprop :) - and finally finished with the transformer model (the decoder part). As we all know, Andrej is a hero of deep learning, and we are very blessed to get this much rich content for free on KRplus, and from a teacher like him. Fascinating stuff from a fascinating contributor to the field of AI 🙏

  • @nikolaMKD95
    @nikolaMKD95 1 year ago +92

    Wow. I thought you were going to use the Transformers library, but you essentially built the entire transformer architecture from scratch. Well done!!

    • @gokulakrishnanr8414
      @gokulakrishnanr8414 a month ago

      Thanks! Yeah, it was a fun challenge building the Transformer from scratch. Glad you're enjoying the video!

  • @curatorsshelf393
    @curatorsshelf393 1 year ago +5

    Andrej, thank you so much for sharing your knowledge and expertise. I've been following your video series and it has been truly amazing. I remember you saying in one of your interviews that preparing a 1-hour video takes more than 10 hours. I cannot thank you enough for what you are doing!

  • @pastrop2003
    @pastrop2003 1 year ago +3

    Thank you, Andrej, this is awesome! This is the best hands-on tutorial on transformer-based language models I have ever come across. It is very gracious of you to share your knowledge and experience.

  • @karanacharya18
    @karanacharya18 1 year ago +4

    Absolutely amazing lecture. Thank you so much Andrej! I finally understand attention and Transformers. "Code is the ultimate truth." And the way you set the stage and explain the concepts and the code is brilliant.

  • @scottsun345
    @scottsun345 1 year ago +2

    Wow, this video and everything it covered are just amazing! There are no other words except thank you, Andrej, for all the effort it took to make this! Really looking forward to more of your great ideas and content!

  • @khalobert1588
    @khalobert1588 1 year ago +39

    I think this man is a singularity, because the world has not seen such a combination of talent and good character. Thanks mate 🙏

  • @WannabeALU
    @WannabeALU 1 year ago +46

    I don't have words to describe how grateful I am to you and the work you are doing. Thank you!

    • @klauszinser
      @klauszinser 1 year ago +3

      The world has got a very good teacher back. Much appreciated.

    • @RKELERekhaye
      @RKELERekhaye 1 year ago

      Fantastic video Andrej, you're the best and so nice.😊

  • @footfunk510
    @footfunk510 10 months ago

    This was amazing. Thank you, Andrej! I've read about the transformer architecture, but watching this code walkthrough really helped me understand what it looks like in an applied way. Pulling the code and the paper together helped bring theory and practice together.

  • @sr3090
    @sr3090 11 months ago +1

    Thank you Andrej for this wonderful session. I am a tech enthusiast who wanted to understand how GPT works, and I came across your video. I have always found the research papers difficult to comprehend and never understood how they actually get implemented. Your video completely changed that. You are such a good teacher and make things so easy to understand. Your fan club just got a new member!! :)

  • @miladaghajohari2308

    These videos are awesome. I have been doing DL research for 3 years, but the way you explain things is so pleasing that I sat through the whole 2 hours. Kudos to you Andrej.

  • @HazemAzim
    @HazemAzim 1 year ago +3

    Wow. Very comprehensive and smooth. You went through almost every detail in an excellent educational manner, which surely took a lot of effort. I have seen many videos on transformers, some of them really very good at explaining the concepts and the math behind them, but in terms of software implementation - how transformers work from a code perspective - this is by far the best I have seen. Thank you.

  • @clamr6122
    @clamr6122 3 days ago +1

    I've watched a lot of explanations of Transformers and this is easily the best. You are a gifted teacher.

  • @matteofogliata21
    @matteofogliata21 7 months ago

    I've just started approaching the transformer architecture in the last two days, and I think this is by far the best explanation. It's well thought out, giving all the hints, intuitions, and demonstrations with simple code. Thank you Andrej!

  • @JamesBradyGames
    @JamesBradyGames 1 year ago +58

    What a wonderful gift to the world. Amazing tutorial. Again. Thank you!

  • @PrakharSrivastav
    @PrakharSrivastav 1 year ago +9

    Truly phenomenal to live in an age where we can learn all this for free from experts like you. Thank you so much Andrej for your contribution. What a gift you have given.

  • @artukikemty
    @artukikemty 1 year ago +15

    Amazing. Watching these videos, I can still believe in humankind; seeing a guy like Andrej sharing his knowledge and his time with the rest of the world is something we do not see every day. Thanks for posting it!

    • @jwalk121
      @jwalk121 11 months ago

      He's a very good teacher, but there are still islands

  • @armaankhokhar7651
    @armaankhokhar7651 1 year ago +2

    Your playlist has been instrumental to my learning and incredibly motivating. Please keep posting!

  • @ayushsrivastava3879

    Thank you for taking the time to create these lectures.
    I'll be the first to buy if you ever want to do a subscription plan.
    Honestly, I learned so much more from this playlist alone than from any other documentation or blogs combined.
    Working with NLP is now entirely different for me.
    I'll work hard to work with you one day.

  • @rw-kb9qv
    @rw-kb9qv 1 year ago +6

    I think this style of teaching is much better than a lecture with PowerPoint and a whiteboard. This way you can actually see what the code is doing instead of guessing what all the math symbols mean. So thank you very much for this video!

    • @13thbiosphere
      @13thbiosphere 1 year ago

      By 2030 this will be the dominant method of learning... vastly more efficient... any university failing to embrace this method will crumble

  • @aureliencobb199
    @aureliencobb199 1 year ago +8

    Giving us these lectures for free - I do not know how to thank you. Great job explaining NNs to us so clearly.

  • @lipingxiong1376
    @lipingxiong1376 1 year ago +27

    Thank you so much for creating such valuable content. A few years ago, I watched your 2016 Stanford computer vision course, which was instrumental in helping me understand backpropagation and other important neural network concepts. Andrew Ng's courses initially led me into the world of machine learning, but I find your videos to be equally educational, focused on fundamental concepts, and presented in a very accessible way. I've also been following your blog and was thrilled to learn about your new KRplus channel. Your dedication to creating these resources is truly appreciated.
    Growing up in rural China, I didn't have many opportunities to learn outside of textbooks. But now, thanks to people like you, I find myself swimming in a sea of knowledge. Thank you for making such a significant impact on my learning journey.
    BTW, I edited this with ChatGPT to make me sound more like a native speaker. :)

    • @eva__4380
      @eva__4380 11 months ago +3

      Similar experience here. I too watched Stanford's computer vision and NLP courses and a few others a while back. I also did the lectures on linear algebra, calculus, probability and stats, etc. from MIT OCW to get a strong grasp of the fundamentals. Without KRplus it wouldn't be possible for me to have access to such high-quality education.

    • @raghulponnusamy9034
      @raghulponnusamy9034 6 months ago

      Can you please share that link with me, @eva__4380?

  • @realaliarain
    @realaliarain 1 year ago +3

    A bundle of thanks for this one.
    This means so much to us. The community is thankful to you.
    Thank you for taking the time to record these masterpieces for us.
    Thanks Andrej

  • @muhajerAlSabil1
    @muhajerAlSabil1 1 year ago +14

    "Andrej, your willingness to share your knowledge and insights on KRplus is truly inspiring. Your passion for teaching and helping others understand complex concepts is evident in your videos, and it's clear that you have a drive to make a positive impact in the field of AI. Keep up the amazing work, and thank you for making this knowledge accessible to all!" PS: this comment was generated using GPT

  • @jasonrothfuss1631
    @jasonrothfuss1631 7 months ago

    This video deserves two thumbs up (or more)! I spent a lot of time watching and rewatching parts of this, coding the model "the hard way", and it was totally worth it. Thank you!

  • @johnini
    @johnini 1 year ago +1

    Sir, you have all our respect!
    You are a legend!! Anyone who has had the chance to share a beer or coffee with you is a really lucky person!!
    Great video, mega clear, and I hope to see more soon about fine-tuning and the further steps of training in the future :)

  • @chung-shienwang6248

    Couldn't be more grateful. We're literally living in the best of times because of you! Thank you so much

  • @rockapedra1130
    @rockapedra1130 1 year ago +8

    This is fantastic. I am amazed that Andrej takes so much of his time to impart this incredibly valuable knowledge for free to all and sundry. He is not only a top researcher but also a fantastic communicator. We have gotten used to big corporations hoarding knowledge and talent to become exploitative monopolies but every so often, humanity puts forth a gem like Mr. Karpathy to keep us all from going head first into the gutter. Thank you!!!

  • @michaeldimattia9015
    @michaeldimattia9015 3 months ago

    MIND = BLOWN! Not only is this incredible content, but the way everything was presented, coded, and explained is so crystal clear, my mind felt comfortable with the complexity. Amazing tutorial, and incredibly inspiring, thanks so much!

  • @user-vb2zw3gg2x
    @user-vb2zw3gg2x 7 months ago

    ty for making such a logically smooth tutorial! it helps to see why we use such a structure. it's also cool that you explain almost everything that appears in the model, even though it might be classic in the field. very nice job, bravo

  • @redfordkobayashi6936

    You just know someone has a deep grasp on the subject matter when they start dishing out "build X from scratch" on a regular basis. Thank you Karpathy for sharing your knowledge with the world. You are more than amazing.

  • @haleemaramzan
    @haleemaramzan 1 year ago +4

    I built this same thing alongside watching the lecture, and loved it! I'm trying to get better at understanding and coding these concepts, and this was extremely helpful. Thank you so much :)

  • @jcmorlando
    @jcmorlando 11 months ago +1

    Simply amazing, thank you Andrej! Hands down the best resource I've consumed to understand how a Transformer is built, and get understanding of how it technically relates to GPT and ChatGPT. I feel like I'm taking my first step into real cutting-edge ML :)

  • @Kirby-Bernard
    @Kirby-Bernard 2 months ago

    We are grateful that talented people like you believe in teaching and helping! This is an amazing video. Clear, precise, brings out a tough topic to a layperson! So much to learn on how to make technical videos.

  • @SchultzC
    @SchultzC 1 year ago +4

    From CS231n and RL Pong to this… there is something special about the way you break down and explain things. I have benefited immensely from it and I’m obviously not the only one. Thank You!

  • @juxyper
    @juxyper 1 year ago +5

    I have some experience with the maths behind all this stuff, but I kind of had problems advancing to creating and training models; these videos are a godsend. Big thanks

  • @mlock1000
    @mlock1000 2 months ago +2

    I only just noticed that this is set up in a perfect 2 column layout so a person can have the script/notebook they are working on side by side with yours and not have to jump around at all. And it's clean and clutter free. That is some classy action, my deepest respect and gratitude.

  • @fooger
    @fooger 1 year ago +14

    As always, fantastic video and sharing... It would be really cool if you made a part II on this, on how we could use PPO/RL to do the fine-tuning part of some basic interactive flow; it doesn't have to be like ChatGPT (Q/A). Thank you so much, Andrej, for such an amazing video!

  • @GPTBot1123
    @GPTBot1123 1 year ago +44

    I've watched this 3 times and I only understand about 80% of it 😂, a testament to how great Andrej is at explaining these models. I'm not a programmer by trade, so a lot of this is totally foreign to me.

  • @petervogt8309
    @petervogt8309 11 months ago +1

    Nothing new in this comment. Just want to say 'thank you!' for this amazing tutorial, ...and all the others! The completeness, the information density and pace, the choice of examples and language... Everything is *just right*, delivered right from the heart and the mind!! Thank you so much Andrej, for taking your time to educate and inspire all of us.

  • @travelwithoutmoving5422
    @travelwithoutmoving5422 11 months ago +1

    Thanks so much for your time. Your contribution is invaluable, and the way you explain things in small steps and in great detail is unique; that's so precious when dealing with complex topics like neural networks, especially for non-native English speakers like me. Can't wait for your next vids. Big hug.

  • @aistamp
    @aistamp 1 year ago +4

    Welcome to KRplus in 2023 where one of the top AI researchers is just casually making videos explaining in detail how to build some of the best ML models. Seriously though, these videos are amazing!

  • @starbuck5043
    @starbuck5043 1 year ago +48

    We live in a time where we can get free lessons on hot topics from one of the best engineers in the business. This is amazing. Thanks, Andrej !

  • @nicorauseo5478
    @nicorauseo5478 1 month ago

    Just finished all the lectures so far, from the makemore series to this one, and my knowledge grew drastically. I went from knowing just the basics to being really comfortable looking under the hood. Definitely going to use this knowledge now to build useful projects. Thank you so much, and I'm excited to keep learning from you. 🔥🔥🔥

  • @tranhuyhoang8610
    @tranhuyhoang8610 8 months ago

    Thanks Andrej for the great video. I like how you manage to explain those technical terms in ML using very simple language. It takes a huge amount of experience and knowledge to do so.

  • @8eck
    @8eck 1 year ago +5

    Reward model and reinforcement learning using that reward model would be super cool to learn. Thank you for the current lecture!

  • @alexandrechikhaoui659

    Amazing content, was in the quest for this ! I'm really grateful for your time and qualifications. Thank you Sir !

  • @chrisw4562
    @chrisw4562 2 months ago

    Thank you so much Andrej for your generosity, spending your valuable time on these lectures. This is absolutely amazing.

  • @tamilselvan9942
    @tamilselvan9942 9 months ago +3

    This is "an insane amount of knowledge packed into a 2-hour video". Hats off, man!!

  • @ArunKumar-iz8bi
    @ArunKumar-iz8bi 1 year ago +12

    Thanks a lot Andrej for making such good videos that explain the core concepts of neural nets. It would be really helpful if you could make a tutorial/video on the entire workflow and the structured thought process you would follow to train a neural network end to end (to arrive at the final model to be used in production). I mean, given a problem statement, how would you train a neural network to solve it, how do you design the experiments to choose the right set of hyperparameters, and so on. A hands-on tutorial video demonstrating this process would definitely help a lot of practitioners trying to use neural networks to solve interesting problems.

  • @aruncjohn
    @aruncjohn 10 months ago

    Thank you, Andrej!...for sharing such an insightful video lecture on GPT architecture! Your clear explanations and in-depth analysis really helped me grasp the intricacies of this fascinating topic. I appreciate your expertise and the effort you put into making complex concepts accessible to the wider audience. Looking forward to more enlightening content from you!

  • @gennarofarina94
    @gennarofarina94 3 months ago +2

    This explanation is a masterpiece.
    You seem to have a lot of fun too by unveiling concepts (like cross attention) 👏

  • @AIlysAI
    @AIlysAI 1 year ago +3

    I don't usually comment on any video, but Andrej simplifies these concepts so that they're easy to understand; it just shows how well he grasps transformers and the 100s of papers he summarized in one video. It comes from years of experience and a beautiful mind!

  • @christianhetling3793

    Hey Andrej, I greatly appreciate you making these videos. Next semester I am taking the course Machine Learning for NLP. I think these kinds of implementation videos are incredible for learning a subject deeply.

  • @yigalkassel8456
    @yigalkassel8456 24 days ago

    It's probably the best KRplus video I've ever seen.
    Such down-to-earth explanations, yet going really deep into the cutting-edge tech of transformers.
    WOW
    Thank you for that!

  • @leewenyeong9892
    @leewenyeong9892 9 months ago +1

    Followed you since you were in your badmephisto days… didn’t know you’d make it this far and I really applaud for all your hard work! Thank you for making me interested in both speedcubing and deep learning ❤

  • @DJ-lo8qj
    @DJ-lo8qj 1 year ago +4

    The students at Stanford who had Andrej as a professor are incredibly lucky; he’s an excellent teacher, breaking down complex topics with high precision and fluidity.

  • @mcnica89
    @mcnica89 1 year ago +9

    Just finished watching this (at 2x speed). I love how hands-on this is... every other tutorial I have seen always has a step where they say "it's roughly like this...", but this one really shows you what is actually needed to make it work. Looking forward to trying this on some fun problems!

  • @superweirdo7
    @superweirdo7 1 year ago +1

    Andrej, amazing work! The whole series is brilliant! Hope more is coming!

  • @senatorpoopypants7182
    @senatorpoopypants7182 10 months ago

    By far the best deep learning application video I've come across. For someone brand new to the space, I'm shocked at how much I'm following along with such advanced ideas. Thank you so much for putting this out there. It has been tremendously helpful.

  • @thedark3612
    @thedark3612 1 year ago +3

    Please keep doing what you are doing! You are an absolute gem of an educator

  • @kelele4266
    @kelele4266 1 year ago +2

    A follow-up video on the fine-tuning stage will be priceless indeed!! I've heard multiple NLP friends say that the key thing that enabled ChatGPT was the curated dataset internal to OpenAI. Super curious to hear what people think. I'd imagine that it was the dataset + fine-tuning (much more so than pre-training, since it's a much smaller model vs. GPT-3, and most models use some kind of Transformer architecture). Thank you so much, Andrej!

  • @iantaggart3064
    @iantaggart3064 3 months ago +2

    The first ten minutes alone taught me more than a quick google search could. You're good at this.

  • @gonzalocordova5934

    Without a doubt the best video I've seen on transformers. Simply THANK YOU for your talent and humility teaching random people

  • @linkin543210
    @linkin543210 1 year ago +4

    andrej is single handedly putting the open in openai

  • @1gogo76
    @1gogo76 10 months ago +4

    Andrej is pure genius wrapped in a humble person 🙌

  • @ayogheswaran9270
    @ayogheswaran9270 11 months ago +1

    @Andrej, thanks a lot for making this. I was having trouble with how layer norm was implemented (pre- vs post-), and with how and where dropout was applied. This video helped me clarify a lot of things.
    Please keep making such amazing videos.
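    The pre- vs post-layer-norm placement this comment refers to can be sketched in a few lines of PyTorch. This is a minimal illustrative block, not the video's code: the names, sizes, and the use of `nn.MultiheadAttention` are simplifications of my own, and the causal mask is omitted for brevity.

    ```python
    import torch
    import torch.nn as nn

    class PreLNBlock(nn.Module):
        """A pre-layer-norm Transformer block: LayerNorm is applied *before*
        the attention and MLP sub-layers, with dropout on each residual branch."""
        def __init__(self, n_embd=32, n_head=4, dropout=0.1):
            super().__init__()
            self.ln1 = nn.LayerNorm(n_embd)
            self.attn = nn.MultiheadAttention(n_embd, n_head,
                                              dropout=dropout, batch_first=True)
            self.ln2 = nn.LayerNorm(n_embd)
            self.mlp = nn.Sequential(
                nn.Linear(n_embd, 4 * n_embd),
                nn.ReLU(),
                nn.Linear(4 * n_embd, n_embd),
                nn.Dropout(dropout),  # dropout just before rejoining the residual
            )

        def forward(self, x):
            # pre-LN: normalize first, transform, then add back to the residual stream
            h = self.ln1(x)
            a, _ = self.attn(h, h, h, need_weights=False)  # causal mask omitted here
            x = x + a
            x = x + self.mlp(self.ln2(x))
            return x

    x = torch.randn(2, 8, 32)   # (batch, time, channels)
    y = PreLNBlock()(x)
    print(tuple(y.shape))       # (2, 8, 32)
    ```

    In the post-LN variant from the original "Attention is All You Need" paper, the LayerNorm would instead be applied *after* each residual addition; the pre-LN ordering above is the one used in the video and in GPT-2.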

  • @EntangleIT
    @EntangleIT 10 months ago

    Thanks so much for taking the time to make this series. It's been incredibly helpful!

  • @ankile
    @ankile 1 year ago +4

    It would be incredibly cool to see a very simple implementation of the second fine-tuning phase! Good lessons in RL to be had for sure :)

  • @RogerBarraud
    @RogerBarraud 1 year ago +3

    To my great surprise I understood most of this at at least a conceptual level.
    [Probably helps that I watched Stanford EE263 and MIT Gilbert Strang Linear Algebra videos already 🙂]
    Thanks very much for this, Andrej!