Self Attention in Transformer Neural Networks (with Code!)

  • Published: Apr 27, 2024
  • Let's understand the intuition, math and code of Self Attention in Transformer Neural Networks
    ABOUT ME
    ⭕ Subscribe: krplus.net/uCodeEmporiu...
    📚 Medium Blog: / dataemporium
    💻 Github: github.com/ajhalthor
    👔 LinkedIn: / ajay-halthor-477974bb
    RESOURCES
    [1 🔎] Code for video: github.com/ajhalthor/Transfor...
    [2 🔎] Transformer Main Paper: arxiv.org/abs/1706.03762
    [3 🔎] Bidirectional RNN Paper: deeplearning.cs.cmu.edu/F20/d...
    PLAYLISTS FROM MY CHANNEL
    ⭕ ChatGPT Playlist of all other videos: • ChatGPT
    ⭕ Transformer Neural Networks: • Natural Language Proce...
    ⭕ Convolutional Neural Networks: • Convolution Neural Net...
    ⭕ The Math You Should Know: • The Math You Should Know
    ⭕ Probability Theory for Machine Learning: • Probability Theory for...
    ⭕ Coding Machine Learning: • Code Machine Learning
    MATH COURSES (7 day free trial)
    📕 Mathematics for Machine Learning: imp.i384100.net/MathML
    📕 Calculus: imp.i384100.net/Calculus
    📕 Statistics for Data Science: imp.i384100.net/AdvancedStati...
    📕 Bayesian Statistics: imp.i384100.net/BayesianStati...
    📕 Linear Algebra: imp.i384100.net/LinearAlgebra
    📕 Probability: imp.i384100.net/Probability
    OTHER RELATED COURSES (7 day free trial)
    📕 ⭐ Deep Learning Specialization: imp.i384100.net/Deep-Learning
    📕 Python for Everybody: imp.i384100.net/python
    📕 MLOps Course: imp.i384100.net/MLOps
    📕 Natural Language Processing (NLP): imp.i384100.net/NLP
    📕 Machine Learning in Production: imp.i384100.net/MLProduction
    📕 Data Science Specialization: imp.i384100.net/DataScience
    📕 Tensorflow: imp.i384100.net/Tensorflow
    TIMESTAMPS
    0:00 Introduction
    0:34 Recurrent Neural Networks Disadvantages
    2:12 Motivating Self Attention
    3:34 Transformer Overview
    7:03 Self Attention in Transformers
    7:32 Coding Self Attention (see the sketch below)
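
    As a taste of what the video builds, here is a minimal NumPy sketch of scaled dot-product self-attention with hypothetical shapes; it mirrors the spirit of the notebook linked in [1], not its exact code.

      import numpy as np

      L, d_k, d_v = 4, 8, 8          # sequence length and vector sizes (hypothetical)
      q = np.random.randn(L, d_k)    # queries: what each word is looking for
      k = np.random.randn(L, d_k)    # keys: what each word can offer
      v = np.random.randn(L, d_v)    # values: what each word actually offers

      def softmax(x):
          e = np.exp(x - x.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)   # normalize over key positions

      scaled = q @ k.T / np.sqrt(d_k)   # (L, L) similarity scores, variance-stabilized
      attention = softmax(scaled)       # each row sums to 1
      new_v = attention @ v             # context-aware word vectors, shape (L, d_v)
      print(new_v.shape)                # (4, 8)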

Comments • 148

  • @CodeEmporium
    @CodeEmporium  a year ago +55

    If you think I deserve it, please consider liking the video and subscribing for more content like this :)

    • @meguellatiyounes8659
      @meguellatiyounes8659 a year ago

      Do you have any idea how transformers generate new data?

    • @15jorada
      @15jorada 11 months ago

      You are amazing man! Of course you deserve it! You are building transformers from the ground up! That's insane!

    • @vipinsou3170
      @vipinsou3170 7 months ago

      @@meguellatiyounes8659 Using the decoder 😮😮😊

  • @nikkilin4396
    @nikkilin4396 2 months ago +3

    It's one of the best videos I have watched. The concepts are explained very well, especially with the code.

  • @shailajashukla5841
    @shailajashukla5841 2 months ago

    Excellent, how well you explained. No other video on YouTube explained it like this. Really good job.

  • @rainmaker5199
    @rainmaker5199 11 months ago +1

    This is great! I've been trying to learn attention but it's hard to get past the abstraction in a lot of the papers that mention it, much clearer this way!

  • @jeffrey5602
    @jeffrey5602 a year ago +7

    What's important is that for every token generation step we always feed the whole sequence of previously generated tokens into the decoder, not just the last one. So you start with the <start> token and generate a new token, then feed <start> + <new token> into the decoder, so basically just appending the generated token to the sequence of decoder inputs. That might not have been clear in the video. Otherwise great work. Love your channel!
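
    A tiny Python sketch of the loop this comment describes, where decode_step is a hypothetical stand-in for a trained decoder (illustrative only, not the video's code):

      # Greedy decoding: the whole generated prefix is fed back in at every step.
      def generate(decode_step, start_token="<start>", end_token="<end>", max_len=20):
          tokens = [start_token]                # begin with just the <start> token
          for _ in range(max_len):
              next_token = decode_step(tokens)  # the decoder sees the full prefix, not only the last token
              tokens.append(next_token)         # append, so the next step gets a longer prefix
              if next_token == end_token:
                  break
          return tokens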

  • @user-gq5rl1kb7n
    @user-gq5rl1kb7n a year ago +11

    I usually don't write comments, but this channel really deserves one! Thank you so much for such a great tutorial. I watched your first video about Transformers and the Attention mechanism, which was really informative, but this one is even more detailed and useful.

    • @CodeEmporium
      @CodeEmporium  a year ago +2

      Thanks so much for the compliments! This is the first in a series of videos called "Transformers from scratch". Hope you'll check out the rest of the playlist

  • @marktahu2932
    @marktahu2932 11 months ago

    I have learnt so much between yourself, ChatGPT, and Alexander & Ava Amini at MIT 6.S191. Thank you all.

  • @tonywang7933
    @tonywang7933 a year ago

    Thank you so much. I searched so many places; this is the first one where a nice person is willing to spend the time to really dig in step by step. I'm going to value this channel as highly as Fireship now.

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks for the compliments and glad you are sticking around!

  • @srijeetful
    @srijeetful 2 months ago

    Extremely well explained. Kudos !!!!

  • @noahcasarotto-dinning1575
    @noahcasarotto-dinning1575 4 months ago

    Best video explaining this that I've seen by far

  • @SOFTWAREMASTER
    @SOFTWAREMASTER a year ago +3

    I was legit searching for self attention concept vids and thinking that it sucked that you didn't cover it yet. And voila, here we are. Thank you so much for uploading!!

    • @CodeEmporium
      @CodeEmporium  a year ago +1

      Glad I could deliver. Will be uploading more such content shortly :)

  • @PraveenHN-zj3ny
    @PraveenHN-zj3ny 26 days ago +2

    Very happy to see Kannada here.
    Great 😍 Love from Kannadigas

  • @pocco8388
    @pocco8388 8 months ago

    Best content I've ever seen. Thanks for this video.

  • @dataflex4440
    @dataflex4440 a year ago

    This has been a most wonderful series on this channel so far

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks a ton! Super glad you enjoyed the series :D

  • @ChrisCowherd
    @ChrisCowherd 7 months ago

    Fantastic explanation! Wow! You have a new subscriber. :) Keep up the great work

  • @prashantlawhatre7007
    @prashantlawhatre7007 a year ago +2

    Waiting for your future videos. This was amazing, especially the masked attention part.

    • @CodeEmporium
      @CodeEmporium  a year ago +2

      Thanks so much! Will be making more over the coming weeks

  • @rajpulapakura001
    @rajpulapakura001 5 months ago +1

    This is exactly what I needed! Can't believe self-attention is that simple!

  • @becayebalde3820
    @becayebalde3820 6 months ago +1

    This is pure gold man!
    Transformers are complex but this video really gives me hope.

    • @pratyushrao7979
      @pratyushrao7979 3 months ago

      What are the prerequisites for this video? Do we need to know about the encoder-decoder architecture beforehand? The video feels like I jumped right into the middle of something without any context. I'm confused

    • @VadimChes
      @VadimChes 25 days ago

      @pratyushrao7979 There are playlists for different topics

  • @simonebonato5881
    @simonebonato5881 8 months ago

    One video to understand them all! Dude, thanks. I've tried to watch like 10 other videos on transformers and attention; yours was really super clear and much more intuitive!

    • @CodeEmporium
      @CodeEmporium  8 months ago

      Thanks so much for this compliment! Means a lot :)

  • @lawrencemacquarienousagi789

    Wonderful work you've done! I really love your video and have studied it twice. Thank you so much!

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks so much for watching! More to come :)

  • @muskanmahajan04
    @muskanmahajan04 10 months ago

    The best explanation on the internet, thank you!

    • @CodeEmporium
      @CodeEmporium  10 months ago

      Thanks so much for the comment. Glad you liked it :)

  • @softwine91
    @softwine91 a year ago +28

    What can I say, dude!
    God bless you.
    This is the only content on the whole of YouTube that really explains the self-attention mechanism in a brilliant way.
    Thank you very much.
    I'd like to know if the key, query, and value matrices are updated via backpropagation during the training phase.

    • @CodeEmporium
      @CodeEmporium  a year ago +2

      Thanks for the kind words. These matrices I mentioned in the code represent the actual data. So no. However, the 3 weight matrices that map a word vector to Q,K,V are indeed updated via backprop. Hope that lil nuance makes sense
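
      To make that nuance concrete, a short PyTorch sketch with hypothetical sizes: q, k, v are computed from the data, while the three projection matrices are the learnable parameters that backprop updates.

        import torch
        import torch.nn as nn

        d_model, d_k = 512, 64                     # hypothetical dimensions
        w_q = nn.Linear(d_model, d_k, bias=False)  # learnable: updated by backprop
        w_k = nn.Linear(d_model, d_k, bias=False)
        w_v = nn.Linear(d_model, d_k, bias=False)

        x = torch.randn(4, d_model)                # word vectors for a 4-token sentence (data, not parameters)
        q, k, v = w_q(x), w_k(x), w_v(x)           # recomputed for every input sentence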

    • @picassoofai4061
      @picassoofai4061 a year ago

      I definitely agree.

  • @MahirDaiyan7
    @MahirDaiyan7 a year ago

    Great! This is exactly what I was looking for in all of the other videos of yours

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks for the comment! There is more to come :)

  • @ayoghes2277
    @ayoghes2277 a year ago

    Thanks a lot for making the video!! This deserves more views.

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks for watching. Hope you enjoy the rest of the playlist as I code the entire transformer out!

  • @bradyshaffer3302
    @bradyshaffer3302 a year ago

    Thank you for this very clear and helpful demonstration!

    • @CodeEmporium
      @CodeEmporium  a year ago

      You are so welcome! And be on the lookout for more :)

  • @shivamkaushik6637
    @shivamkaushik6637 a year ago

    With all my heart, you deserve a lot of respect
    Thanks for the content. Damn I missed my metro station because of you.

    • @CodeEmporium
      @CodeEmporium  a year ago

      Hahahaha your words are too kind! Please check the rest of the "Transformers from scratch" playlist for more (it's fine to miss the metro for education lol)

  • @chrisogonas
    @chrisogonas 11 months ago

    Awesome! Well illustrated. Thanks

  • @chessfreak8813
    @chessfreak8813 5 months ago

    Thanks! You are very deserving and underrated!

  • @deepalisharma1327
    @deepalisharma1327 8 months ago

    Thank you for making this concept so easy to understand. Can't thank you enough 😊

    • @CodeEmporium
      @CodeEmporium  8 months ago

      My pleasure. Thank you for watching

  • @pulkitmehta1795
    @pulkitmehta1795 11 months ago

    Simply wow..

  • @junior14536
    @junior14536 a year ago

    My god, that was amazing, you have a gift my friend;
    Love from Brazil :D

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks a ton :) Hope you enjoy the channel

  • @JBoy340a
    @JBoy340a a year ago

    Great walkthrough of the theory and then relating it to the code.

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks so much! Will be making more of these over the coming weeks

  • @amiralioghli8622
    @amiralioghli8622 7 months ago

    Thank you so much for taking the time to code and explain the transformer model in such detail. I followed your series from zero to hero. You are amazing and, if possible, please do a series on how transformers can be used for time series anomaly detection and forecasting. It is extremely needed on YouTube!
    Thanks in advance.

  • @PaulKinlan
    @PaulKinlan a year ago

    This is brilliant, I've been looking for a bit more of a hands-on demonstration of how the process is structured.

  • @sockmonkeyadam5414
    @sockmonkeyadam5414 11 months ago

    u have saved me. thank u.

  • @maximilianschlegel3216
    @maximilianschlegel3216 a year ago

    This is an incredible video, thank you!

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks so much for watching and commenting!

  • @SIADSrikanthB
    @SIADSrikanthB 21 days ago

    I really like how you use Kannada language examples in your explanations.

  • @mamo987
    @mamo987 a year ago

    Amazing work! Very glad I subscribed

  • @Slayer-dan
    @Slayer-dan a year ago +2

    Huge respect ❤️

  • @nandiniloomba
    @nandiniloomba a year ago

    Thank you for teaching this. ❤

    • @CodeEmporium
      @CodeEmporium  a year ago

      My pleasure! Hope you enjoy the series

  • @rajv4509
    @rajv4509 a year ago

    Absolutely brilliant! Thumba chennagidhay :)

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks a ton! Super glad you like this. I hope you like the rest of this series :)

  • @AI-xe4fg
    @AI-xe4fg a year ago

    Good video, bro.
    I was studying the Transformer this week but was still a little confused before I found your video.
    Thanks

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks for the kind words. I really appreciate it :)

  • @yonahcitron226
    @yonahcitron226 9 months ago

    this is amazing!

  • @sriramayeshwanth9789
    @sriramayeshwanth9789 7 months ago

    you made me cry brother

  • @paull923
    @paull923 a year ago

    Thx for your efforts!

  • @shaktisd
    @shaktisd 4 months ago

    Excellent video. If you can, please make a "hello world" of self-attention, e.g. first showing a PCA representation of the embeddings before self-attention and then after self-attention, to show how context impacts the overall embedding

  • @varungowtham3002
    @varungowtham3002 a year ago

    Namaskara Ajay, I was very happy to learn that you are a Kannadiga! Your videos are turning out very well.

    • @CodeEmporium
      @CodeEmporium  a year ago

      Glad you liked this and thanks for watching! :)

  • @jamesjang8389
    @jamesjang8389 5 months ago

    Amazing video! Thank you 😊😊

  • @dataflex4440
    @dataflex4440 a year ago

    Brilliant Mate

  • @jazonsamillano
    @jazonsamillano a year ago

    Great video. Thank you very much.

  • @faiazahsan6774
    @faiazahsan6774 a year ago

    Thank you for explaining in such an easy way. It would be great if you could upload some code on the GCN algorithm.

  • @yijingcui7736
    @yijingcui7736 4 months ago

    this is very helpful

  • @FelLoss0
    @FelLoss0 9 months ago +1

    Dear Ajay. Thank you so much for your videos!
    I have a quick question here. Why did you transpose the values in the softmax function? Also... why did you specify axis=-1? I'm a newbie at this and I'd like to have strong and clear foundations.
    have a lovely weekend :D
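
    An illustrative NumPy sketch (not necessarily the exact notebook code) of what the transpose and axis=-1 are doing: each row of the (L, L) score matrix corresponds to one query, and both versions below normalize every row so it sums to 1.

      import numpy as np

      scores = np.random.randn(4, 4)   # hypothetical (L, L) attention scores: rows = queries, cols = keys

      def softmax_t(x):
          # np.sum(..., axis=-1) has shape (L,), so transposing lets broadcasting divide each row by its own sum
          return (np.exp(x).T / np.sum(np.exp(x), axis=-1)).T

      def softmax_keepdims(x):
          # equivalent version that avoids the transpose by keeping the summed axis
          e = np.exp(x)
          return e / e.sum(axis=-1, keepdims=True)

      print(np.allclose(softmax_t(scores), softmax_keepdims(scores)))  # True: rows sum to 1 either way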

  • @li-pingho1441
    @li-pingho1441 a year ago

    you saved my life!!!!!

  • @pranayrungta
    @pranayrungta 10 months ago +1

    Your videos are way better than the Stanford lecture CS224N

    • @CodeEmporium
      @CodeEmporium  10 months ago

      Words I am not worthy of. Thank you :)

  • @picassoofai4061
    @picassoofai4061 a year ago

    Mashallah, man you are a rocket.

  • @arunganesan1559
    @arunganesan1559 a year ago +1

    Thanks!

    • @CodeEmporium
      @CodeEmporium  a year ago

      Thanks for the donation! And you are very welcome!

  • @gabrielnilo6101
    @gabrielnilo6101 11 months ago

    I stop the video sometimes and roll it back some seconds to hear you explaining something again and I am like: "No way that this works, this is insane", some explanations on AI techniques are not enough and yours are truly simple and easy to understand, thank you.
    Do you collab with anyone when making these videos, or is it done all by yourself?

    • @CodeEmporium
      @CodeEmporium  11 months ago +3

      Haha yea. Things aren't actually super complicated. :) I make these videos on my own. Scripting, coding, research, editing. Fun stuff

  • @rujutaawate5412
    @rujutaawate5412 9 months ago

    Thanks, @CodeEmporium / Ajay for the great explanation!
    One quick question- can you please explain how the true values of Q, K, and V are actually computed? I understand that we start with random initialization but do these get updated through something like backpropagation? If you already have a video of this then would be great if you can state the name/redirect!
    Thanks once again for helping me speed up my AI journey! :)

    • @CodeEmporium
      @CodeEmporium  9 months ago

      That's correct, backprop will update these weights. For exact details, you can continue watching this playlist "Transformers From Scratch" where we will build a working transformer. This video was the first in that series. Hope you enjoy it :)

  • @Slayer-dan
    @Slayer-dan a year ago

    Ustad 🙏

  • @imagiro1
    @imagiro1 8 months ago

    Got it, thank you very much, but one question: What I still don't understand: We are talking about neural networks, and they are trained. So all the math you show here, how do we (know|make sure) that it actually happens inside the network? You don't train specific regions of the NN to specific tasks (like calculating a dot product), right?

  • @naziadana7885
    @naziadana7885 a year ago

    Thank you very much for this great video! Can you please upload a video on Self Attention code using Graph Convolutional Network (GCN)?!

    • @CodeEmporium
      @CodeEmporium  a year ago

      I'll look into this at some point. Thanks for the tips.

  • @dickewurstfinger9093
    @dickewurstfinger9093 3 months ago

    Really great video, but why do the Q, K, V vectors have dim 8? I know it's random in this video, but what do the values in the vectors say about the word? Or is it just to "identify" a word in a certain space, like in word embeddings, and give it a certain "id"?

    • @CodeEmporium
      @CodeEmporium  3 months ago

      The choice of 8 heads in multi-head attention is simply the choice of a hyperparameter in the main paper. This might be the number they experimented with that got reasonable results. That said, I am confident you shouldn't see drastic differences with small fluctuations of this number.
      Further, I feel like powers of 2 (such as 1, 2, 4, 8, 16, 32) are usually tried out as these hyperparameters. But as mentioned before, numbers in between may work just as well. I think it's about having enough heads to capture complexity but not too many for slow processing

  • @ritviktyagi9221
    @ritviktyagi9221 a year ago +1

    How do we get the values of the q, k and v vectors after initializing them randomly? Great video btw. Waiting for more such videos.

    • @CodeEmporium
      @CodeEmporium  a year ago +2

      The weight matrices that map the original word vectors to these 3 vectors are trainable parameters. So they would be updated by back propagation during training

    • @ritviktyagi9221
      @ritviktyagi9221 a year ago

      @@CodeEmporium Thanks for the clarification

  • @virtualphilosophyjourney8897
    @virtualphilosophyjourney8897 3 months ago

    In which phase does the model use the pretrained info to decide the output?

  • @bhavyageethika4560
    @bhavyageethika4560 6 months ago +1

    Why is it d_k in both Q and K in the np.random.randn?
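
    A quick shape sketch of why both use d_k (hypothetical sizes, not the exact notebook code): the dot product between queries and keys only works if they share the same dimension, while values can use a separate d_v.

      import numpy as np

      L, d_k, d_v = 4, 8, 8            # d_v could differ from d_k without breaking anything
      q = np.random.randn(L, d_k)      # queries and keys must share d_k ...
      k = np.random.randn(L, d_k)
      v = np.random.randn(L, d_v)      # ... values only need to match the sequence length L

      scores = q @ k.T                 # (L, d_k) @ (d_k, L) -> (L, L): requires matching d_k
      out = scores @ v                 # (L, L) @ (L, d_v) -> (L, d_v)
      print(scores.shape, out.shape)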

  • @creativeuser9086
    @creativeuser9086 11 months ago +4

    how do we actually choose the dimensions of Q, K and V? Also, are they parameters that are fixed for each word in the English language, and do we get them from training the model? That part is a little confusing since you just mentioned that Q, V and K are initialized at random, so I assume they have to change in the training of the model.

  • @wishIKnewHowToLove
    @wishIKnewHowToLove a year ago

    thx

  • @7_bairapraveen928
    @7_bairapraveen928 a year ago

    Why do we need to stabilize the variance of the attention scores with the query and key vectors?
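
    A quick numerical sketch of the reason (illustrative numbers, not from the video): the variance of a dot product of two random d_k-dimensional vectors grows roughly like d_k, so dividing by sqrt(d_k) brings it back to about 1 and keeps the softmax from saturating.

      import numpy as np

      d_k = 64
      q = np.random.randn(10000, d_k)
      k = np.random.randn(10000, d_k)

      raw = np.sum(q * k, axis=1)      # one dot product per row
      scaled = raw / np.sqrt(d_k)

      print(raw.var())                 # roughly d_k (about 64)
      print(scaled.var())              # roughly 1, which keeps softmax gradients healthy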

  • @govindkatyura7485
    @govindkatyura7485 a year ago

    I have a few doubts:
    1. Do we use multiple FFNNs after the attention layer? Suppose we have 100 input words for the encoder, will 100 FFNNs get trained, one for each word? I checked the source code but they were using only one, so I'm confused how one FFNN can handle multiple embeddings, especially with batch size (see the sketch below).
    2. In the decoder do we also pass multiple inputs, just like the encoder layer, especially in the training part?
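
    On doubt 1, a minimal PyTorch sketch of a position-wise feed-forward layer with hypothetical sizes: a single FFN (one set of weights) is applied independently at every position, so 100 input words do not mean 100 separate networks.

      import torch
      import torch.nn as nn

      d_model, d_ff = 512, 2048              # hypothetical sizes
      ffn = nn.Sequential(                   # one set of weights...
          nn.Linear(d_model, d_ff),
          nn.ReLU(),
          nn.Linear(d_ff, d_model),
      )

      x = torch.randn(32, 100, d_model)      # batch of 32 sentences, 100 positions each
      y = ffn(x)                             # ...applied independently at every position
      print(y.shape)                         # torch.Size([32, 100, 512])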

  • @klam77
    @klam77 a year ago +1

    The "query", "key", and "value" terms come from the world of databases! So how do individual words in "My name is Ajay" each map to their own query and key and value semantically? That remains a bit foggy. I know you've shown random numbers in the example, but is there any semantic meaning to it? Is this the "embeddings" of the LLM?

  • @SnehaSharma-nl9do
    @SnehaSharma-nl9do 2 months ago +2

    Kannada Represent!! 🖐

  • @McMurchie
    @McMurchie a year ago

    Hi, I noticed this has been added to the transformer playlist, but there are 2 unavailable tracks. Do I need them in order to get the full end-to-end grasp?

    • @CodeEmporium
      @CodeEmporium  a year ago

      You can follow the order of the "Transformers from scratch" playlist. This should be the first video in the series. Hope this helps and thanks for watching! (It's still being created so you can follow along :) )

  • @josephpark2093
    @josephpark2093 9 months ago

    I watched the video around 3 times but I still don't understand.
    Why are these awesome videos so unknown?

  • @anwarulislam6823
    @anwarulislam6823 a year ago +1

    How could someone hack my brain wave and convoluted this by evaluate inner voice?
    May I know this procedure?
    #Thanks

    • @SOFTWAREMASTER
      @SOFTWAREMASTER a year ago +1

      Haha ikr. I felt the same. Was looking for a good Self attention video.

  • @ayush_stha
    @ayush_stha a year ago

    In the demonstration, you generated the q, k & v vectors randomly, but in reality, what will the actual source of those values be?

    • @CodeEmporium
      @CodeEmporium  a year ago

      Each of the q, k, v vectors will be a function of each word (or byte pair encoding) in the sentences. I say a "function" of the sentences since we add position encoding to the word vectors and then convert them into q, k, v vectors via feed forward layers. Some of the later videos in this "Transformers from scratch" playlist show some code on exactly how it's created. So you can check those out for more intel :)

  • @philhamilton3946
    @philhamilton3946 a year ago

    What is the name of the textbook you are using?

    • @klam77
      @klam77 a year ago

      If you watch the vid carefully, the URL shows the books are "online" free-access bibles of the field.

  • @ajaytaneja111
    @ajaytaneja111 11 months ago

    Ajay, I don't think capturing the context of the words 'after' has significance in language modelling. In language modelling you are predicting only the next word. For a task like machine translation, yes. Thus I don't think bi-directional RNNs have anything better to offer for language modelling than the regular (one-way) RNNs. Let me know what you think

  • @jonfe
    @jonfe a year ago

    I still don't understand the difference between Q, K and V, can someone explain?

  • @sometimesdchordstrikes...7876
    @sometimesdchordstrikes...7876 a month ago

    @1:41 Here you have said that you want the context of the words that will be coming in the future, but in the masking part of the video you have said that it would be cheating to know the context of the words that will be coming in the future
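
    One way to reconcile the two statements, as a NumPy sketch with hypothetical scores: encoder-style attention uses no mask (context from both directions), while decoder-style attention adds a causal mask so a position cannot peek at future words during generation.

      import numpy as np

      L = 4
      scores = np.random.randn(L, L)                   # hypothetical raw attention scores

      mask = np.triu(np.ones((L, L)), k=1)             # 1s above the diagonal mark future positions
      decoder_scores = scores + np.where(mask == 1, -np.inf, 0.0)

      def softmax(x):
          e = np.exp(x - x.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)

      print(softmax(scores).round(2))          # full (bidirectional) context
      print(softmax(decoder_scores).round(2))  # row i only attends to positions <= i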

  • @NK-ju6ns
    @NK-ju6ns a year ago

    I felt the q, k, v parameters were not explained very well. A search analogy would be better for getting an intuition of these parameters than explaining them as "what I can offer, what I actually offer"

  • @thepresistence5935
    @thepresistence5935 a year ago

    Bro, it's 100% better than your PPT videos

    • @CodeEmporium
      @CodeEmporium  a year ago +1

      Thanks so much! Just exploring different styles :)

  • @bkuls
    @bkuls a year ago +1

    Guru aarama? Nanu kooda Kannada ne!

    • @CodeEmporium
      @CodeEmporium  a year ago

      Doin super well ma guy. Thanks for watching and commenting! :)

  • @kotcraftchannelukraine6118
    @kotcraftchannelukraine6118 5 months ago

    You forgot to show the most important thing: how to train self-attention with backpropagation. You forgot about the backward pass

    • @CodeEmporium
      @CodeEmporium  5 months ago

      This is the first video in a series of videos called "Transformers from scratch". Later videos show how the entire architecture is trained. Hope you enjoy the videos

    • @kotcraftchannelukraine6118
      @kotcraftchannelukraine6118 5 months ago

      @@CodeEmporium thank you, I subscribed

  • @thechoosen4240
    @thechoosen4240 7 months ago +1

    Good job bro, JESUS IS COMING BACK VERY SOON; WATCH AND PREPARE

  • @ChethanaSomeone
    @ChethanaSomeone 11 months ago +2

    Seriously, are you from Karnataka? Your accent is so different, dude.

  • @azursmile
    @azursmile a month ago

    Lots of time on the mask, but none on training the attention matrix 🤔

  • @venkatsahith6795
    @venkatsahith6795 6 months ago

    Bro, why don't you work through an example while explaining?