![Visually Explained](/img/default-banner.jpg)
- Videos: 20
- Views: 2,151,575
Visually Explained
Joined 28 Aug 2013
Machine Learning and Optimization videos with a strong emphasis on building intuition with visual explanations.
The Kernel Trick in Support Vector Machine (SVM)
By default, SVM can only produce linear boundaries between classes, which is not enough for most machine learning applications. To get nonlinear boundaries, you have to pre-apply a nonlinear transformation to the data. The kernel trick lets you bypass the need to specify this nonlinear transformation explicitly. Instead, you specify a "kernel" function that directly describes how points relate to each other. Kernels are much more fun to work with and come with important computational benefits.
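As a small illustration of the idea (a sketch, not the video's code): the quadratic polynomial kernel k(x, y) = (x·y)² on 2-D inputs computes exactly the inner product of an explicit 3-D feature map, without ever constructing that map.

```python
import numpy as np

def phi(x):
    # Explicit quadratic feature map for 2-D input
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def poly_kernel(x, y):
    # The kernel computes the same inner product without building phi
    return np.dot(x, y)**2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(np.dot(phi(x), phi(y)), poly_kernel(x, y))  # both equal (x.y)^2 = 1
```

The same trick scales to feature maps that are huge or even infinite-dimensional (as with the RBF kernel), which is where the computational benefit comes from.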
---------------
Credit:
🐍 Manim and Python : github.com/3b1b/manim
🐵 Blender3D: www.blender.org/
🗒️ Emacs: www.gnu.org/software/emacs/
This video would not have been possible without th...
Views: 238,290
Videos
Goemans-Williamson Max-Cut Algorithm | The Practical Guide to Semidefinite Programming (4/4)
17K views · 2 years ago
Fourth and final video of the Semidefinite Programming series. In this video, we will go over Goemans and Williamson's algorithm for the Max-Cut problem. Their algorithm, which is still state-of-the-art today, is one of the biggest breakthroughs in approximation algorithms. Remarkably, it is based on Semidefinite Programming. Python code included as usual. References: - Original paper by Goemans and...
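To illustrate just the random-hyperplane rounding step (a sketch, not the video's code): assume the SDP relaxation has already been solved. For a bipartite graph like the 4-cycle, the optimal embedding places the two sides of the bipartition at antipodal unit vectors, so a random hyperplane recovers the full cut almost surely.

```python
import numpy as np

rng = np.random.default_rng(0)
# 4-cycle: a bipartite graph whose max cut is all 4 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
# Hypothetical SDP output: antipodal unit vectors, one direction per side
v = np.array([1.0, 0.0])
V = np.array([v, -v, v, -v])
# Random-hyperplane rounding: sign of the projection onto a random direction
r = rng.normal(size=2)
side = np.sign(V @ r)
cut = sum(side[i] != side[j] for i, j in edges)
print(cut)  # 4
```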
Stability of Linear Dynamical Systems | The Practical Guide to Semidefinite Programming (3/4)
12K views · 2 years ago
Third video of the Semidefinite Programming series. In this video, we will see how to use semidefinite programming to check whether a linear dynamical system is asymptotically stable. Thanks to Lyapunov's theory, this task can be reduced to searching for a so-called Lyapunov function. Python code included as usual. Timestamps: 0:00 Intro 0:18 Stability 1:58 Lyapunov 4:50 Python code Credit: 🐍 M...
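As a minimal numerical sketch (using SciPy's direct Lyapunov-equation solver rather than the SDP formulation presumably used in the video): for a stable A, solving A^T P + P A = -Q with Q positive definite yields a positive definite P, i.e. a quadratic Lyapunov function V(x) = x^T P x.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A stable system: the eigenvalues of A are -1 and -2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)
# Solve A^T P + P A = -Q for the Lyapunov certificate P
P = solve_continuous_lyapunov(A.T, -Q)
print(np.linalg.eigvalsh(P))  # all positive => stability certified
```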
The Practical Guide to Semidefinite Programming (2/4)
16K views · 2 years ago
Second video of the Semidefinite Programming series. In this video, we will see how to use semidefinite programming to solve a toy geometry problem. Python code included. Timestamps: 0:00 Intro 0:41 Interesting Fact about Positive Semidefinite matrices 2:17 Let's solve this problem! 5:24 Semidefinite Programming Credit: 🐍 Manim and Python : github.com/3b1b/manim 🐵 Blender3D: www.blender.org/ 🗒️...
What Does It Mean For a Matrix to be POSITIVE? The Practical Guide to Semidefinite Programming(1/4)
32K views · 2 years ago
Video series on the wonderful field of Semidefinite Programming and its applications. In this first part, we explore the question of how we can generalize the notion of positivity to matrices. Timestamps: 0:00 Intro 0:41 Questions 2:50 Definition 6:09 PSD vs eigenvalues 7:40 (Visual) examples Credit: 🐍 Manim and Python : github.com/3b1b/manim 🐵 Blender3D: www.blender.org/ 🗒️ Emacs: www.gnu.org/...
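A quick way to test the eigenvalue characterization mentioned in the timestamps (a sketch, not the video's code): a symmetric matrix is positive semidefinite exactly when all its eigenvalues are nonnegative.

```python
import numpy as np

def is_psd(M, tol=1e-10):
    # Symmetric M is PSD iff all eigenvalues are >= 0,
    # equivalently x^T M x >= 0 for every vector x
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

A = np.array([[2.0, -1.0], [-1.0, 2.0]])  # eigenvalues 1 and 3
B = np.array([[1.0, 2.0], [2.0, 1.0]])    # eigenvalues -1 and 3
print(is_psd(A), is_psd(B))  # True False
```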
Linear Regression in 2 minutes
251K views · 2 years ago
Linear Regression in 2 minutes. Credit: 🐍 Manim and Python : github.com/3b1b/manim 🐵 Blender3D: www.blender.org/ 🗒️ Emacs: www.gnu.org/software/emacs/ Music/Sound: www.bensound.com This video would not have been possible without the help of Gökçe Dayanıklı.
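For reference, a minimal least-squares fit in a few lines of NumPy (an illustrative sketch, not the video's code): on noiseless data, the solver recovers the exact slope and intercept.

```python
import numpy as np

# Fit y = a*x + b by least squares
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                          # ground truth: a=2, b=1
X = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # ~[2.0, 1.0]
```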
What is Linear Programming (LP)? (in 2 minutes)
16K views · 2 years ago
Overview of Linear Programming in 2 minutes. Additional Information on the distinction between "Polynomial" vs "Strongly Polynomial" algorithms: An algorithm for solving LPs of the form max c^t x s.t. Ax \le b runs in polynomial time if its running time can be bounded by a polynomial D^r, where "r" is some integer, and D is the bit-size of the data of the problem, or in other words, D is the am...
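A tiny LP of exactly the form max c^T x s.t. Ax ≤ b can be solved with SciPy (a sketch with made-up data, not from the video; `linprog` minimizes, hence the sign flip on c).

```python
import numpy as np
from scipy.optimize import linprog

# max 3*x1 + 5*x2  s.t.  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x >= 0
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
res = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None))
print(res.x, -res.fun)  # optimum at (2, 6) with value 36
```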
Accelerate Gradient Descent with Momentum (in 3 minutes)
31K views · 2 years ago
Learn how to use the idea of Momentum to accelerate Gradient Descent. References: - Lectures on Convex Optimization by Yuri Nesterov: link.springer.com/book/10.1007/978-3-319-91578-4 - Convex Optimization: Algorithms and Complexity by Sébastien Bubeck: arxiv.org/pdf/1405.4980.pdf - MIT Lecture by Gilbert Strang: ruclips.net/video/wrEcHhoJxjM/видео.html Timestamps: - 0:00 Intro - 1:00 Momentum G...
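A minimal heavy-ball momentum loop on a quadratic (an illustrative sketch; the step size and momentum coefficient are arbitrary choices, not the video's): the velocity accumulates past gradients, which damps oscillation along the steep direction.

```python
import numpy as np

def grad(x):
    # Gradient of f(x) = x1^2 + 10 * x2^2, minimized at the origin
    return np.array([2.0 * x[0], 20.0 * x[1]])

x = np.array([5.0, 5.0])
v = np.zeros(2)
lr, beta = 0.05, 0.9
for _ in range(1000):
    v = beta * v - lr * grad(x)  # momentum step: decayed velocity minus gradient
    x = x + v
print(x)  # very close to [0, 0]
```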
The Unreasonable Effectiveness of Stochastic Gradient Descent (in 3 minutes)
59K views · 2 years ago
Visual and intuitive Overview of stochastic gradient descent in 3 minutes. References: - The third explanation is from here: arxiv.org/abs/1802.06175 - Other references mentioned in the video: arxiv.org/abs/1509.01240, proceedings.mlr.press/v40/Ge15.pdf - AI plays hide and seek: openai.com/blog/emergent-tool-use/ - AI plays Dota 2: openai.com/five/ - InterFaceGAN: ruclips.net/video/uoftpl3Bj6w/...
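A bare-bones SGD loop on a noiseless least-squares problem, one sample per update (an illustrative sketch, not the video's code): because the data are consistent, the iterates converge to the true weights.

```python
import numpy as np

rng = np.random.default_rng(42)
w_true = np.array([1.0, -2.0])
X = rng.normal(size=(200, 2))
y = X @ w_true                    # noiseless targets

w = np.zeros(2)
lr = 0.05
for epoch in range(50):
    for i in rng.permutation(len(X)):          # shuffle each epoch
        g = (X[i] @ w - y[i]) * X[i]           # gradient of one sample's squared error
        w = w - lr * g
print(w)  # approaches w_true = [1, -2]
```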
Gradient Descent in 3 minutes
164K views · 2 years ago
Visual and intuitive overview of the Gradient Descent algorithm. This simple algorithm is the backbone of most machine learning applications. References: - AI plays hide and seek: openai.com/blog/emergent-tool-use/ - AI plays Dota 2: openai.com/five/ - InterFaceGAN: ruclips.net/video/uoftpl3Bj6w/видео.html - Boyd and Vandenberghe's book on Convex Optimization (Sections 9.2 and 9.3): web.stanfor...
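The algorithm itself fits in a few lines; here is a sketch on a one-dimensional quadratic (illustrative, not the video's code): repeatedly step against the gradient until the iterate settles at the minimizer.

```python
import numpy as np

def df(x):
    # Derivative of f(x) = (x - 3)^2, minimized at x = 3
    return 2.0 * (x - 3.0)

x = 0.0
lr = 0.1
for _ in range(100):
    x = x - lr * df(x)  # step against the gradient
print(x)  # ~3.0
```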
Principal Component Analysis (PCA)
190K views · 2 years ago
This video is a gentle and motivated introduction to Principal Component Analysis (PCA). We use PCA to analyze the World Happiness Report published in 2021 and discover what makes countries truly happy. :) References: - Scikit-Learn User Guide : scikit-learn.org/stable/modules/decomposition.html#pca - A Tutorial on Principal Component Analysis: arxiv.org/abs/1404.1100 - Andrew Ng Stanford Cours...
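A minimal PCA-from-scratch sketch (illustrative synthetic data, not the happiness-report analysis from the video): center the data, form the covariance matrix, and take its top eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data stretched along [1, 1]: the first principal component should align with it
t = rng.normal(size=300)
data = np.outer(t, [1.0, 1.0]) + 0.01 * rng.normal(size=(300, 2))

centered = data - data.mean(axis=0)       # PCA requires centered data
cov = centered.T @ centered / len(data)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                      # top principal component
print(pc1)  # ~ +/-[0.707, 0.707]
```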
Support Vector Machine (SVM) in 2 minutes
545K views · 2 years ago
2-Minute crash course on Support Vector Machine, one of the simplest and most elegant classification methods in Machine Learning. Unlike neural networks, SVMs can work with very small datasets and are not prone to overfitting. This video would not have been possible without the help of Gökçe Dayanıklı.
Make Money Betting on Politics - Arbitrage with Predictit
8K views · 2 years ago
Step-by-step tutorial to find and profit from arbitrage opportunities on PredictIt. PredictIt is a market with unique attributes that make it the perfect place for arbitrage: it is closed to big players like banks and hedge funds, and it lets you bet on political outcomes. - PredictIt arbitrage calculator: visuallyexplained.xyz/predictit-arbitrage-calculator/ - PredictIt: www.predictit.org/ 0:00...
The Karush-Kuhn-Tucker (KKT) Conditions and the Interior Point Method for Convex Optimization
111K views · 2 years ago
A gentle and visual introduction to the topic of Convex Optimization (part 3/3). In this video, we continue the discussion on the principle of duality, which ultimately leads us to the "interior point method" in optimization. Along the way, we derive the celebrated Karush-Kuhn-Tucker (KKT) conditions. This is the third video of the series. Part 1: What is (Mathematical) Optimization? (ruclips.n...
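For reference, the KKT conditions for a problem min f(x) s.t. g_i(x) ≤ 0, h_j(x) = 0 in standard form (the video's notation may differ):

```latex
\begin{aligned}
&\nabla f(x^\star) + \textstyle\sum_i \lambda_i \nabla g_i(x^\star)
  + \sum_j \nu_j \nabla h_j(x^\star) = 0 && \text{(stationarity)} \\
&g_i(x^\star) \le 0, \qquad h_j(x^\star) = 0 && \text{(primal feasibility)} \\
&\lambda_i \ge 0 && \text{(dual feasibility)} \\
&\lambda_i \, g_i(x^\star) = 0 && \text{(complementary slackness)}
\end{aligned}
```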
Convexity and The Principle of Duality
70K views · 2 years ago
Convexity and The Principle of Duality
How many times must you roll a die to get a six?
13K views · 3 years ago
How many times must you roll a die to get a six?
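The number of rolls until the first six is geometric with p = 1/6, so the expected count is 1/p = 6; a quick simulation confirms it (a sketch, not from the video):

```python
import numpy as np

rng = np.random.default_rng(1)
# Geometric distribution: number of trials until the first success, p = 1/6
rolls = rng.geometric(p=1/6, size=200_000)
print(rolls.mean())  # ~6
```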
Visually Explained: Newton's Method in Optimization
94K views · 3 years ago
Visually Explained: Newton's Method in Optimization
thank you.
classics 0:07 0:08 0:08
This kind of video reminds me of what the internet is all about: sharing knowledge. Thanks for the content. I wish the internet had stopped here
Thank you. Perfectly simplified in 2 minutes. Now I can build on this basic understanding
At 17:34 you said that if g is positive, then the log of a negative quantity would be infinity. Is that correct? The log of a negative quantity is not defined, right?
That was amazing, thanks 😊
The least squares error example is beautiful!!!
This is such a great video on so many levels. May god bless the people who had a hand in making it 🙏🏻🙏🏻
Amazing explanation Thank you so much ☺️
hmm close but no cigar
I've been following your channel for a long time :) Glad everything is going great for you and that you're making progress :)
We center the data to have a mean of 0, which allows us to match the form of the covariance matrix provided in the video
Nice video
Man, I just checked that you haven't uploaded any new videos since 2 years! Hope you're doing well and come back with these amazing videos <3
Seems like a pretty fun little site. I only played a bit, but I quite liked it :)
I watched your video, it is so informative and humble. Thank you for sharing your video, I follow you
Is the answer to follow up question#1 = 4 ?
Amazing video. Thanks!
@VisuallyExplained Is the answer to the 2nd follow-up question (the median value) 2 throws? For example, take 100 throws; out of these, about 16.6 would yield a 6 on the first throw, and around 41.6 would yield a 6 by their second attempt. And since we want the 50th throw (or rather the average of the 50th and 51st), it would be 2.
I love it! Thanks for that. Can you share the code used for PCA in this video, please? I am trying to repeat it, but my results don't match yours, and I want to see where I'm going wrong (I didn't find it in the description or on github). Thanks for the video.
I really liked the video and the visuals, but I think it would be better without the "generic music" in the background.
Thank you for taking the time to post your feedback, this is very useful for the growth of this channel!
At 4:57, "the happiest countries seem to be the most balanced ones" seems wrong; shouldn't it be "the most powerful ones"?
Very nice animations, and well explained. But just to be a bit technical, isn't what you described called "mini-batch gradient descent"? Because for stochastic gradient descent don't we just use one training example per iteration?? 😅😅
You saved me for my Data mining exam tomorrow 🙏
rubbish video
Quick question: How do we choose the gamma parameter in the RBF kernel at 3:00? By, say, cross validation?
Basically NN is f*@k around until you find the best possible value
It's like being less restrictive keeps you from optimizing the wrong thing and getting stuck in the wrong valley (or hill for evolution). Feels a lot like how i kept trying to optimize being a nice guy bc there was some positive responses and without some chaos i never would have seen another valley of being a bad boy which has much less cost and better results
Just learn how to speak slowly and you will have more views
Honestly didn't really help with my questions, but I didn't expect a 3 minute video to answer them. This was very well done, the visualization was great, and everything it touched on (while brief) was concise and accurate. Subbed. <3
Great video!
Amazing video! I could save a lot of time! Thank you very much.
sagapo<3
Brilliantly Genius!
how can v_k be used before it is calculated in the next line? How can one know the 'condition' if this is actual data and not a mathematical formula?
Perfect thank you
Perfect explanation thank you 🙏🏼
great explanation!
That's great!
3:39 It's possible to get less than 1/2 Max-Cut. If all nodes are the same color, that's 0 cuts. We would have to shuffle the list of nodes and split them equally for assignment. Independent assignments will get you something like coin-flip results without a 1/2 lower bound.
Isn’t the one who “goes first” the one on the inside?
Thank you for this clear explanation.
good but why the loud ass shopping music
Amazing video thank you
Awesome video! This is very very helpful (as I'm going to take the convex optimization class exam tomorrow...) However, I am a bit confused at around 6:30. You mentioned that the minimizer x goes first and the maximizer u goes second in the expression at 6:45. I think in mathematics, the expression is evaluated inside-first? So in this case the inner part, maximizer u, would be the first player, and the minimizer x would be the second. I'm not sure if I understand this correctly...
Awesome content and video edition, thank you so much. Do you have any advice to produce such kind of graphics and animation ?
Wow, what an amazing video. I understood SVM in 2 minutes, which I didn't get from watching other 15-minute tutorials.
I've finally known where the hell Lagrangian comes from Such a great video
Sir, do you use the "MANIM" library of Python to create these beautiful animations in your great videos?
loved it