- 332 Videos
- 6,880,069 views
CodeEmporium
United States
Joined: May 27, 2016
Everything new and interesting in Machine Learning, Deep Learning, Data Science, & Artificial Intelligence. Hoping to build a community of data science geeks and talk about future tech! Project demos and more! Subscribe for awesome videos :)
Hyper parameters - EXPLAINED!
Let's talk about hyper parameters and how they are used in neural networks and deep learning!
ABOUT ME
⭕ Subscribe: krplus.net/uCodeEmporium
📚 Medium Blog: medium.com/@dataemporium
💻 Github: github.com/ajhalthor
👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/
RESOURCES
[1] Code for Deep Learning 101 playlist: github.com/ajhalthor/deep-learning-101
PLAYLISTS FROM MY CHANNEL
⭕ Deep Learning 101: krplus.net/p/PLTl9hO2Oobd_NwyY_PeSYrYfsvHZnHGPU.html
⭕ Natural Language Processing 101: krplus.net/p/PLTl9hO2Oobd_bzXUpzKMKA3liq2kj6LfE.html
⭕ Reinforcement Learning 101: krplus.net/p/PLTl9hO2Oobd9kS--NgVz0EPNyEmygV1Ha.html&si=AuThDZJwG19cgTA8
Views: 802
Videos
NLP with Neural Networks | ngram to LLMs
1.9K views
Transfer Learning - EXPLAINED!
2.5K views · 2 months ago
Embeddings - EXPLAINED!
3.8K views · 2 months ago
Loss functions in Neural Networks - EXPLAINED!
3.9K views · 2 months ago
Optimizers in Neural Networks - EXPLAINED!
1.9K views · 3 months ago
Activation functions in neural networks
2.1K views · 3 months ago
Backpropagation in Neural Networks - EXPLAINED!
2.5K views · 3 months ago
Building your first Neural Network
3.2K views · 3 months ago
Deep Q-Networks Explained!
12K views · 4 months ago
Monte Carlo in Reinforcement Learning
5K views · 5 months ago
Q-learning - Explained!
8K views · 5 months ago
Bellman Equation - Explained!
10K views · 6 months ago
Elements of Reinforcement Learning
6K views · 6 months ago
ChatGPT: Zero to Hero
3.7K views · 7 months ago
[ 100k Special ] Transformers: Zero to Hero
33K views · 7 months ago
20 papers to master Language modeling?
8K views · 7 months ago
Llama - EXPLAINED!
22K views · 8 months ago
How AI (like ChatGPT) understands word sequences.
2.9K views · 8 months ago
Convolution in NLP
4.1K views · 9 months ago
Word2Vec, GloVe, FastText - EXPLAINED!
15K views · 10 months ago
Word Embeddings - EXPLAINED!
11K views · 10 months ago
Great work indeed. It helped clear up a lot of things, especially the part where softmax is used for the decoder output. So the first row will output the target language's first word. But in scenarios where two source words resonate with one target-language word, how is softmax handled there? Can you please help me figure this out?
Really good explanation
did you know at that time how revolutionary this would be?
Answer for Quiz 2: Option 'B'. Frank was updating Q-values based on observed rewards from simulated episodes.
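For context on the comment above, this is a minimal sketch of what "updating Q-values based on observed rewards from simulated episodes" (every-visit Monte Carlo estimation) can look like; the episode format, variable names, and toy data are illustrative assumptions, not taken from the video:

```python
from collections import defaultdict

def mc_q_update(episodes, gamma=0.9):
    """Every-visit Monte Carlo estimate of Q(s, a) from simulated episodes.

    Each episode is a list of (state, action, reward) tuples.
    """
    q = defaultdict(float)
    counts = defaultdict(int)
    for episode in episodes:
        g = 0.0
        # Walk the episode backwards, accumulating the discounted return G.
        for state, action, reward in reversed(episode):
            g = reward + gamma * g
            counts[(state, action)] += 1
            # Incremental mean: Q <- Q + (G - Q) / N
            q[(state, action)] += (g - q[(state, action)]) / counts[(state, action)]
    return dict(q)

# One simulated episode: s0 -a0-> s1 -a1-> terminal, rewards 0 then 1.
episodes = [[("s0", "a0", 0.0), ("s1", "a1", 1.0)]]
q = mc_q_update(episodes)
# Q(s1, a1) averages to 1.0; Q(s0, a0) to 0 + 0.9 * 1.0 = 0.9
```

The key point matching answer B: the Q-values come purely from returns observed in (simulated) episodes, with no bootstrapping from other Q estimates.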
this video is great
Understood nothing about how this model works. Oversimplification and storytelling leave it disconnected from how the real thing works. Now I know: an AE reduces the input data into a smaller vector, and a VAE can generate blurry images. What I don't know: what happens to the input data and the dataset, and what the pooling intuition is for?
Thank you so much for making such great videos. It really helps someone new to DS quickly understand all the concepts. I appreciate you explaining with actual code and going through each step!
For the algo
Dear Sir, if I may ask 2 questions here: 1) At 7:25, how did you remove y_i on the grounds that it's independent? y_i can have opposite signs, so how can it be removed like a 1? 2) At 7:58, in the matrix representation, why did you convert p(x_i) in a different way? Or does it really not matter, since you will substitute beta_i into the sigmoid function at each iteration? Many thanks!
Straight to the point. Nice and super clean explanation for non-linear activation functions. Thanks!
Thanks really very helpful resource for me! Keep rocking Ajay.
This is exactly what I was looking for. End to end explanation clearly showing the steps involved. Thanks a ton man!❤
Hey, I don't get what you mean at 6:29. Why do you convert every single character rather than each word? I think embeddings are for tokens/words rather than characters. Could you please make this clear?
so underrated!
Thanks for the tutorial
Quiz 2: A, B, C
Am I the only one to notice that Vsauce reference at 02:28?
this aged well
I am just astonished, man. I am a first-year student from Bengaluru who stumbled upon your channel while learning about the AI buzzwords, and then I found out that you're a Kannadiga as well. Great, man! Although I am overwhelmed by the videos you make, I am just so happy that a guy from here can reach such a great extent. You are truly an inspiration, brother!
Poor man's 3blue1brown, but a nice explanation ❤
Thank you, sir... a like from China.
I don't get how the token is selected in top-k sampling. Is it drawn randomly from the top k?
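To the commenter's question: yes, in top-k sampling the next token is drawn at random, but only from the k highest-probability tokens, with their probabilities renormalized. A minimal sketch, where the toy vocabulary size and probabilities are invented for illustration:

```python
import random

def top_k_sample(probs, k, rng=random.Random(0)):
    """Sample a token id from the k most probable entries of `probs`.

    probs: list of probabilities over the vocabulary (sums to 1).
    """
    # Keep only the k highest-probability token ids.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    # Renormalize their probabilities so they sum to 1 again.
    total = sum(probs[i] for i in top)
    weights = [probs[i] / total for i in top]
    # Draw randomly according to the renormalized weights.
    return rng.choices(top, weights=weights, k=1)[0]

# Toy distribution over a 5-token vocabulary; with k=2, only
# tokens 1 and 3 (probs 0.4 and 0.3) can ever be sampled.
probs = [0.1, 0.4, 0.05, 0.3, 0.15]
token = top_k_sample(probs, k=2)
assert token in (1, 3)
```

So the draw is random, but restricted: tokens outside the top k have zero chance of being chosen.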
Thanos snap hehe
gold
this is very good. thank you!
Great and simple explanation, it's very helpful for me. But do you think this "multi-head attention" could be used for time series forecasting? And if so, what type of attention would it be?
Could you also make a video about various ways to test the output of transformer models?
Great explanation sir! Thx a lot!
Thank you for the amazing explanation.
Great video!
B
My meta Raybans
I have struggled to find a good explanation of transformers, and your videos are just amazing. Please keep releasing new content about AI.
7.52
I love your shit, man. This was so useful, I actually understood this ML shit and now can be Elon Musk up in this LLM shit.
Sir, are you a Kannadiga? Wow!! Do you work in the USA, sir, or are you pursuing a degree?
Very useful video
I hate you for making those noises. I want to learn; comedy is something I would pass on.
Good video! There is a small typo on the summary page about on-policy.
@codebasics and @CodeEmporium are the best channels for learning high-level concepts. They should collaborate!
This is the best. Thanks!
Xi ∈ ℝ^D: "Xi" represents a specific customer, where "i" is an index referring to a particular customer. "∈" denotes membership, meaning "Xi" belongs to, or is an element of, the set that follows. "ℝ^D" is the space of D-dimensional real vectors, where "D" is the dimensionality of the feature space. This indicates that each customer is represented as a vector of real numbers with "D" dimensions. Each dimension might correspond to a specific feature or attribute of the customer, such as age, income, or spending habits. So the statement "Xi ∈ ℝ^D" means that each customer "Xi" is represented as a vector of real numbers with "D" dimensions.
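The comment above can be made concrete in a few lines of plain Python; the feature names and values here are invented for illustration:

```python
# Each customer x_i is a vector in R^D: D real numbers.
# Here D = 3, with hypothetical features: age, income ($k), monthly spend ($).
x_1 = [34.0, 72.5, 410.0]   # customer 1
x_2 = [51.0, 58.0, 230.0]   # customer 2

D = len(x_1)                # dimensionality of the feature space
# A dataset of N customers is then an N x D matrix (one row per customer).
X = [x_1, x_2]
assert D == 3 and len(X) == 2
```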
No way lstm existed in 1991
Fantastic video, Ajay! Just two questions: 1. How many attention heads are optimal? For example, if there are 10 words in a sentence, is 10 a good number of attention heads? 2. Do more attention layers correspond to better performance?
well explained brother
Such an awesome video! Can't believe I hadn't made the connection between ridge and Lagrangians; it literally has a lambda in it, lol!
With the lasso intuition, for the stepwise function you get for theta, how do you get the conditions on the right, i.e. y_i < lambda/2? I thought perhaps, instead of writing theta < 0, you are just using the implied relationship between y_i and lambda. E.g., if theta < 0, then |theta| = -theta, which after optimizing gives theta = y - lambda/2, i.e. y = lambda/2 + theta. But then I get the opposite conditions to yours: since theta is negative in this case, wouldn't that give y = lambda/2 + theta < lambda/2?
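For reference on the question above: assuming the one-dimensional lasso subproblem being discussed is $\min_\theta \,(y-\theta)^2 + \lambda|\theta|$ (a convention assumption; putting a factor $\tfrac12$ on the quadratic would turn each $\lambda/2$ into $\lambda$), the closed-form minimizer is the soft-thresholding operator:

```latex
\hat\theta =
\begin{cases}
y - \lambda/2, & y > \lambda/2,\\[2pt]
0, & |y| \le \lambda/2,\\[2pt]
y + \lambda/2, & y < -\lambda/2
\end{cases}
\qquad\Longleftrightarrow\qquad
\hat\theta = \operatorname{sign}(y)\,\max\bigl(|y| - \lambda/2,\ 0\bigr).
```

Note that the branch with $\theta < 0$ gives $\theta = y + \lambda/2$ (not $y - \lambda/2$), and requiring $\theta < 0$ there yields the condition $y < -\lambda/2$, which may resolve the sign confusion in the comment.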
"Control other effect through randomisation"
very good quality video!
Bro, TBH, no words to appreciate such a well-structured video in such a short time, and the explanation was easily understandable even for people with less knowledge. Thanks for the video, man.