Visually Explained
  • 20 videos
  • 2,151,575 views
The Kernel Trick in Support Vector Machine (SVM)
SVM can only produce linear boundaries between classes by default, which is not enough for most machine learning applications. In order to get nonlinear boundaries, you have to pre-apply a nonlinear transformation to the data. The kernel trick allows you to bypass the need for specifying this nonlinear transformation explicitly. Instead, you specify a "kernel" function that directly describes how pairs of points relate to each other. Kernels are much more fun to work with and come with important computational benefits.
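As a rough illustration (a minimal scikit-learn sketch with made-up data and parameters, not the code from the video), compare a linear SVM with an RBF-kernel SVM on data that no straight line can separate:

    # Hedged sketch: illustrative data and hyperparameters, not taken from the video.
    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    # Two concentric rings: no linear boundary can separate them.
    X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

    linear_svm = SVC(kernel="linear").fit(X, y)
    rbf_svm = SVC(kernel="rbf", gamma=2.0).fit(X, y)  # kernel trick: no explicit feature map

    print("linear kernel accuracy:", linear_svm.score(X, y))  # poor: the boundary is a line
    print("RBF kernel accuracy:   ", rbf_svm.score(X, y))     # near perfect

With the RBF kernel, the nonlinear feature map is never computed explicitly; the solver only evaluates kernel values between pairs of points.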
---------------
Credit:
🐍 Manim and Python : github.com/3b1b/manim
🐵 Blender3D: www.blender.org/
🗒️ Emacs: www.gnu.org/software/emacs/
This video would not have been possible without th...
Views: 238,290

Videos

Goemans-Williamson Max-Cut Algorithm | The Practical Guide to Semidefinite Programming (4/4)
17K views · 2 years ago
Fourth and last video of the Semidefinite Programming series. In this video, we will go over Goemans and Williamson's algorithm for the Max-Cut problem. Their algorithm, which is still state-of-the-art today, is one of the biggest breakthroughs in approximation theory. Remarkably, it is based on Semidefinite Programming. Python code included as usual. References: - Original paper by Goemans and...
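For a concrete picture of the pipeline, here is a hedged cvxpy sketch (tiny made-up graph, not the code from the video): solve the Max-Cut SDP relaxation, then round the solution with a random hyperplane.

    # Hedged sketch: illustrative 4-node graph, not the video's code.
    import numpy as np
    import cvxpy as cp

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    n = 4

    # SDP relaxation: maximize sum over edges of (1 - X_ij)/2 with X PSD and X_ii = 1.
    X = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(sum((1 - X[i, j]) / 2 for i, j in edges))
    problem = cp.Problem(objective, [cp.diag(X) == 1])
    problem.solve()

    # Recover unit vectors v_i with v_i . v_j = X_ij, then cut with a random hyperplane.
    eigvals, eigvecs = np.linalg.eigh(X.value)
    V = eigvecs @ np.diag(np.sqrt(np.clip(eigvals, 0, None)))  # row i is v_i
    r = np.random.default_rng(0).normal(size=n)
    side = np.sign(V @ r)  # +1 / -1 labels define the two sides of the cut

    cut_value = sum(1 for i, j in edges if side[i] != side[j])
    print("SDP upper bound:", problem.value, " rounded cut value:", cut_value)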
Stability of Linear Dynamical Systems | The Practical Guide to Semidefinite Programming (3/4)
12K views · 2 years ago
Third video of the Semidefinite Programming series. In this video, we will see how to use semidefinite programming to check whether a linear dynamical system is asymptotically stable. Thanks to Lyapunov's theory, this task can be reduced to searching for a so-called Lyapunov function. Python code included as usual. Timestamps: 0:00 Intro 0:18 Stability 1:58 Lyapunov 4:50 Python code Credit: 🐍 M...
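As a rough companion sketch (made-up system matrix, not necessarily the code from the video), one way to phrase the search for a quadratic Lyapunov function V(x) = x^T P x as a semidefinite feasibility problem in cvxpy:

    # Hedged sketch: certify stability of x' = A x by finding P > 0 with A^T P + P A < 0.
    # The matrix A below is made up for illustration (eigenvalues -1 and -3, so stable).
    import numpy as np
    import cvxpy as cp

    A = np.array([[-1.0, 2.0],
                  [ 0.0, -3.0]])
    n = A.shape[0]

    P = cp.Variable((n, n), symmetric=True)
    Q = cp.Variable((n, n), symmetric=True)   # auxiliary variable, will equal A^T P + P A
    eps = 1e-3                                # small margin to model strict inequalities

    constraints = [Q == A.T @ P + P @ A,
                   P >> eps * np.eye(n),
                   Q << -eps * np.eye(n)]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()

    print("Lyapunov certificate found (system stable)?", problem.status == cp.OPTIMAL)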
The Practical Guide to Semidefinite Programming (2/4)
16K views · 2 years ago
Second video of the Semidefinite Programming series. In this video, we will see how to use semidefinite programming to solve a toy geometry problem. Python code included. Timestamps: 0:00 Intro 0:41 Interesting Fact about Positive Semidefinite matrices 2:17 Let's solve this problem! 5:24 Semidefinite Programming Credit: 🐍 Manim and Python : github.com/3b1b/manim 🐵 Blender3D: www.blender.org/ 🗒️...
What Does It Mean For a Matrix to be POSITIVE? The Practical Guide to Semidefinite Programming(1/4)
32K views · 2 years ago
Video series on the wonderful field of Semidefinite Programming and its applications. In this first part, we explore the question of how we can generalize the notion of positivity to matrices. Timestamps: 0:00 Intro 0:41 Questions 2:50 Definition 6:09 PSD vs eigenvalues 7:40 (Visual) examples Credit: 🐍 Manim and Python : github.com/3b1b/manim 🐵 Blender3D: www.blender.org/ 🗒️ Emacs: www.gnu.org/...
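For a quick hands-on check of the eigenvalue characterization (a hedged numpy sketch with made-up matrices, not the video's examples):

    # A symmetric matrix M is positive semidefinite iff x^T M x >= 0 for every x,
    # iff all of its eigenvalues are >= 0.  Matrices below are illustrative.
    import numpy as np

    M_psd = np.array([[2.0, -1.0],
                      [-1.0, 2.0]])   # eigenvalues 1 and 3 -> PSD
    M_not = np.array([[1.0, 3.0],
                      [3.0, 1.0]])    # eigenvalues 4 and -2 -> not PSD

    rng = np.random.default_rng(0)
    for name, M in [("M_psd", M_psd), ("M_not", M_not)]:
        eigs = np.linalg.eigvalsh(M)    # eigenvalue test
        x = rng.normal(size=2)
        quad = x @ M @ x                # quadratic-form test for one random x
        print(name, "eigenvalues:", eigs, " sample x^T M x:", round(quad, 3))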
Linear Regression in 2 minutes
251K views · 2 years ago
Linear Regression in 2 minutes. Credit: 🐍 Manim and Python : github.com/3b1b/manim 🐵 Blender3D: www.blender.org/ 🗒️ Emacs: www.gnu.org/software/emacs/ Music/Sound: www.bensound.com This video would not have been possible without the help of Gökçe Dayanıklı.
What is Linear Programming (LP)? (in 2 minutes)
16K views · 2 years ago
Overview of Linear Programming in 2 minutes. Additional information on the distinction between "Polynomial" vs "Strongly Polynomial" algorithms: An algorithm for solving LPs of the form max c^T x s.t. Ax ≤ b runs in polynomial time if its running time can be bounded by D^r, where "r" is some integer and D is the bit-size of the data of the problem, or in other words, D is the am...
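To make the LP form above concrete, here is a hedged scipy sketch on made-up numbers (scipy's linprog minimizes, so the objective is negated):

    # Tiny illustrative LP:  max c^T x  s.t.  A x <= b,  x >= 0.
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([3.0, 2.0])
    A = np.array([[1.0, 1.0],
                  [2.0, 1.0]])
    b = np.array([4.0, 5.0])

    # linprog minimizes, so pass -c and flip the sign of the optimal value.
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    print("optimal x:", res.x, " optimal value:", -res.fun)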
Accelerate Gradient Descent with Momentum (in 3 minutes)
31K views · 2 years ago
Learn how to use the idea of Momentum to accelerate Gradient Descent. References: - Lectures on Convex Optimization by Yuri Nesterov: link.springer.com/book/10.1007/978-3-319-91578-4 - Convex Optimization: Algorithms and Complexity by Sébastien Bubeck: arxiv.org/pdf/1405.4980.pdf - MIT Lecture by Gilbert Strang: ruclips.net/video/wrEcHhoJxjM/видео.html Timestamps: - 0:00 Intro - 1:00 Momentum G...
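A minimal sketch of the heavy-ball update (illustrative function and hyperparameters, not the video's code):

    # Gradient descent with momentum on the ill-conditioned quadratic
    # f(x, y) = 0.5 * (x**2 + 25 * y**2):
    #     v_{k+1} = beta * v_k - lr * grad f(p_k),   p_{k+1} = p_k + v_{k+1}
    import numpy as np

    def grad_f(p):
        x, y = p
        return np.array([x, 25.0 * y])

    p = np.array([10.0, 1.0])   # starting point
    v = np.zeros(2)             # velocity
    lr, beta = 0.02, 0.9        # step size and momentum coefficient

    for _ in range(200):
        v = beta * v - lr * grad_f(p)
        p = p + v

    print("final point:", p)    # close to the minimizer (0, 0)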
The Unreasonable Effectiveness of Stochastic Gradient Descent (in 3 minutes)
59K views · 2 years ago
Visual and intuitive overview of stochastic gradient descent in 3 minutes. References: - The third explanation is from here: arxiv.org/abs/1802.06175 - Other references mentioned in the video: arxiv.org/abs/1509.01240, proceedings.mlr.press/v40/Ge15.pdf - AI plays hide and seek: openai.com/blog/emergent-tool-use/ - AI plays Dota 2: openai.com/five/ - InterFaceGAN: ruclips.net/video/uoftpl3Bj6w/...
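A hedged mini-batch SGD sketch on synthetic least-squares data (everything below is made up for illustration; with batch_size = 1 it becomes single-example SGD, which relates to the mini-batch question in the comments further down):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 1000, 3
    X = rng.normal(size=(n, d))
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=n)

    w = np.zeros(d)
    lr, batch_size = 0.05, 32
    for _ in range(500):
        idx = rng.choice(n, size=batch_size, replace=False)  # random mini-batch
        Xb, yb = X[idx], y[idx]
        grad = 2.0 / batch_size * Xb.T @ (Xb @ w - yb)       # gradient of the batch MSE
        w -= lr * grad

    print("estimated w:", w)   # close to [1, -2, 0.5]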
Gradient Descent in 3 minutes
164K views · 2 years ago
Visual and intuitive overview of the Gradient Descent algorithm. This simple algorithm is the backbone of most machine learning applications. References: - AI plays hide and seek: openai.com/blog/emergent-tool-use/ - AI plays Dota 2: openai.com/five/ - InterFaceGAN: ruclips.net/video/uoftpl3Bj6w/видео.html - Boyd and Vandenberghe's book on Convex Optimization (Sections 9.2 and 9.3): web.stanfor...
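For reference, the whole algorithm fits in a few lines; a hedged one-dimensional sketch (illustrative function and step size, not the video's code):

    # Plain gradient descent  x_{k+1} = x_k - lr * f'(x_k)  on  f(x) = (x - 3)**2.
    def f_prime(x):
        return 2.0 * (x - 3.0)

    x, lr = 0.0, 0.1
    for _ in range(50):
        x -= lr * f_prime(x)

    print("x after 50 steps:", x)   # converges to the minimizer x = 3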
Principal Component Analysis (PCA)
190K views · 2 years ago
This video is a gentle and motivated introduction to Principal Component Analysis (PCA). We use PCA to analyze the World Happiness Report published in 2021 and discover what makes countries truly happy. :) References: - Scikit-Learn User Guide : scikit-learn.org/stable/modules/decomposition.html#pca - A Tutorial on Principal Component Analysis: arxiv.org/abs/1404.1100 - Andrew Ng Stanford Cours...
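A hedged scikit-learn sketch of the same kind of analysis (the table below is made up; it is not the World Happiness Report data or the code used in the video):

    import numpy as np
    from sklearn.decomposition import PCA

    # Rows = countries, columns = made-up indicators (e.g. GDP, support, freedom, health).
    X = np.array([[1.4, 0.9, 0.8, 72.0],
                  [1.1, 0.8, 0.6, 68.0],
                  [0.7, 0.6, 0.5, 61.0],
                  [1.5, 1.0, 0.9, 74.0],
                  [0.9, 0.5, 0.4, 63.0]])

    pca = PCA(n_components=2)      # scikit-learn centers the data internally
    Z = pca.fit_transform(X)

    print("explained variance ratio:", pca.explained_variance_ratio_)
    print("2-D coordinates of each country:")
    print(Z)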
Support Vector Machine (SVM) in 2 minutes
545K views · 2 years ago
2-Minute crash course on Support Vector Machine, one of the simplest and most elegant classification methods in Machine Learning. Unlike neural networks, SVMs can work with very small datasets and are not prone to overfitting. This video would not have been possible without the help of Gökçe Dayanıklı.
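A minimal scikit-learn sketch of the idea (tiny made-up dataset, not from the video): the fitted boundary is determined only by the support vectors.

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],    # class 0
                  [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])   # class 1
    y = np.array([0, 0, 0, 1, 1, 1])

    clf = SVC(kernel="linear", C=1e3).fit(X, y)   # large C ~ (almost) hard margin

    print("support vectors:")
    print(clf.support_vectors_)
    print("weights w:", clf.coef_[0], " bias b:", clf.intercept_[0])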
Make Money Betting on Politics - Arbitrage with Predictit
8K views · 2 years ago
Step-by-step tutorial to find and profit from arbitrage opportunities on PredictIt. PredictIt is a market with unique attributes that make it the perfect place for arbitrage: it is closed to big players like banks and hedge funds, and it lets you bet on political outcomes. - PredictIt arbitrage calculator: visuallyexplained.xyz/predictit-arbitrage-calculator/ - PredictIt: www.predictit.org/ 0:00...
The Karush-Kuhn-Tucker (KKT) Conditions and the Interior Point Method for Convex Optimization
111K views · 2 years ago
A gentle and visual introduction to the topic of Convex Optimization (part 3/3). In this video, we continue the discussion on the principle of duality, which ultimately leads us to the "interior point method" in optimization. Along the way, we derive the celebrated Karush-Kuhn-Tucker (KKT) conditions. This is the third video of the series. Part 1: What is (Mathematical) Optimization? (ruclips.n...
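For reference, the standard statement of the KKT conditions for min f(x) s.t. g_i(x) <= 0, h_j(x) = 0, written in LaTeX (the notation in the video may differ slightly):

    \begin{align*}
      \nabla f(x^\star) + \sum_i \lambda_i \nabla g_i(x^\star)
          + \sum_j \nu_j \nabla h_j(x^\star) &= 0   && \text{(stationarity)} \\
      g_i(x^\star) \le 0, \qquad h_j(x^\star) &= 0  && \text{(primal feasibility)} \\
      \lambda_i &\ge 0                              && \text{(dual feasibility)} \\
      \lambda_i \, g_i(x^\star) &= 0                && \text{(complementary slackness)}
    \end{align*}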
Convexity and The Principle of Duality
70K views · 2 years ago
Convexity and The Principle of Duality
What Is Mathematical Optimization?
113K views · 2 years ago
What Is Mathematical Optimization?
How many times must you roll a die to get a six?
13K views · 3 years ago
How many times must you roll a die to get a six?
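For reference (a standard fact about the geometric distribution, not necessarily the video's framing): the number of rolls until the first six is geometric with p = 1/6, so the expected number of rolls is 1/p = 6. A quick simulation as a sanity check:

    import random

    random.seed(0)
    trials = 100_000
    total_rolls = 0
    for _ in range(trials):
        rolls = 1
        while random.randint(1, 6) != 6:   # keep rolling until a six appears
            rolls += 1
        total_rolls += rolls

    print("average rolls until a six:", total_rolls / trials)   # close to 6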
Visually Explained: Kalman Filters
168K views · 3 years ago
Visually Explained: Kalman Filters
Visually Explained: Newton's Method in Optimization
94K views · 3 years ago
Visually Explained: Newton's Method in Optimization

Comments

  • @tseckwr3783
    @tseckwr3783 4 days ago

    thank you.

  • @B_knows_A_R_D-xh5lo
    @B_knows_A_R_D-xh5lo 5 days ago

    classics 0:07 0:08 0:08

  • @felipeazank3134
    @felipeazank3134 5 days ago

    this kind of video reminds me of what the internet is all about: sharing knowledge. Thanks for the content. I wish the internet had stopped here

  • @Ken08Odida
    @Ken08Odida 6 days ago

    Thank you. Perfectly simplified in 2 minutes. Now I can build on this basic understanding

  • @harsh_hybrid_thenx
    @harsh_hybrid_thenx 7 days ago

    At 17:34 you said that if g is positive, then the log of a negative quantity would be infinity. Is that correct? The log of a negative quantity is not defined, right?

  • @asmaaali8263
    @asmaaali8263 8 days ago

    That was amazing, thanks 😊

  • @gustavgille9323
    @gustavgille9323 8 days ago

    The least squares error example is beautiful!!!

  • @mirandac1364
    @mirandac1364 10 days ago

    This is such a great video on so many levels. May god bless the people who had a hand in making it 🙏🏻🙏🏻

  • @evavashisth9103
    @evavashisth9103 11 days ago

    Amazing explanation Thank you so much ☺️

  • @theProf-xc5pe
    @theProf-xc5pe 11 days ago

    hmm close but no cigar

  • @user-yf5jz3zq5n
    @user-yf5jz3zq5n 13 days ago

    I've been following your channel for a long time) glad that everything is going great for you and that you're making progress)

  • @magalhaees
    @magalhaees 15 days ago

    We center the data to have a mean of 0, which allows us to match the form of the covariance matrix provided in the video

  • @mehdirexon
    @mehdirexon 15 days ago

    Nice video

  • @RAyLV17
    @RAyLV17 16 days ago

    Man, I just checked and you haven't uploaded any new videos in 2 years! Hope you're doing well and come back with these amazing videos <3

  • @user-mm8wj5hb8y
    @user-mm8wj5hb8y 19 days ago

    The site actually seems pretty nice. I only played around a little, but I quite liked it)

  • @NarimanRava
    @NarimanRava 20 days ago

    I watched your video, your video is so informative and humble, thank you for sharing your video, I follow you

  • @weisanpang7173
    @weisanpang7173 21 days ago

    Is the answer to follow-up question #1 = 4?

  • @andres_camarillo
    @andres_camarillo 23 days ago

    Amazing video. Thanks!

  • @tejkiranv4056
    @tejkiranv4056 25 days ago

    @VisuallyExplained Is the answer to the 2nd follow-up question (the median value) 2 throws? For example, take 100 throws: out of these, 16.6 throws would yield a 6 on the first throw. Around 41.6 throws would yield a 6 by their second attempt. And since we want the 50th throw (or rather the average of the 50th and 51st), it would be 2.

  • @iskhezia
    @iskhezia 27 days ago

    I love it! Thanks for that. Can you share the code used for PCA in this video, please? I am trying to repeat it, but my results don't match yours, and I want to see where I'm going wrong (I didn't find it in the description or on GitHub). Thanks for the video.

  • @Leo-vv3jd
    @Leo-vv3jd 27 days ago

    I really liked the video and the visuals, but I think it would be better without the "generic music" in the background.

    • @VisuallyExplained
      @VisuallyExplained 27 days ago

      Thank you for taking the time to post your feedback, this is very useful for the growth of this channel!

  • @Arthur-uw1vm
    @Arthur-uw1vm 28 days ago

    At 4:57, "the happiest countries seem to be the most balanced ones" seems wrong; shouldn't it be "the most powerful ones"?

  • @anikdas567
    @anikdas567 29 days ago

    Very nice animations, and well explained. But just to be a bit technical, isn't what you described called "mini-batch gradient descent"? Because for stochastic gradient descent, don't we just use one training example per iteration?? 😅😅

  • @angelo6082
    @angelo6082 a month ago

    You saved me for my Data mining exam tomorrow 🙏

  • @chrischoir3594
    @chrischoir3594 a month ago

    rubbish video

  • @hantiop
    @hantiop a month ago

    Quick question: How do we choose the gamma parameter in the RBF kernel at 3:00? By, say, cross validation?

  • @manojcygnus9305
    @manojcygnus9305 a month ago

    Basically NN is f*@k around until you find the best possible value

  • @DG123z
    @DG123z a month ago

    It's like being less restrictive keeps you from optimizing the wrong thing and getting stuck in the wrong valley (or hill for evolution). Feels a lot like how I kept trying to optimize being a nice guy because there were some positive responses, and without some chaos I never would have seen another valley of being a bad boy, which has much less cost and better results

  • @naveedanwer8262
    @naveedanwer8262 a month ago

    Just learn how to speak slowly and you will have more views

  • @snowcamo
    @snowcamo a month ago

    Honestly didn't really help with my questions, but I didn't expect a 3 minute video to answer them. This was very well done, the visualization was great, and everything it touched on (while brief) was concise and accurate. Subbed. <3

  • @1matzeplayer1
    @1matzeplayer1 a month ago

    Great video!

  • @adnon2604
    @adnon2604 a month ago

    Amazing video! I could save a lot of time! Thank you very much.

  • @Christoo228
    @Christoo228 a month ago

    sagapo ("I love you" in Greek) <3

  • @ashimov1970
    @ashimov1970 a month ago

    Brilliantly Genius!

  • @pnachtwey
    @pnachtwey a month ago

    how can v_k be used before it is calculated in the next line? How can one know the 'condition' if this is actual data and not a mathematical formula?

  • @mohammadzeinali5414
    @mohammadzeinali5414 a month ago

    Perfect thank you

  • @rand4492
    @rand4492 a month ago

    Perfect explanation thank you 🙏🏼

  • @_Lavanya-ju8yi
    @_Lavanya-ju8yi a month ago

    great explanation!

  • @larissacury7714
    @larissacury7714 a month ago

    That's great!

  • @duydangdroid
    @duydangdroid a month ago

    3:39 It's possible to get less than 1/2 Max-Cut. If all nodes are the same color, that's 0 cuts. We would have to shuffle the list of nodes and split them equally for assignment. Independent assignments will get you something like coin-flip results without a 1/2 lower bound.

  • @yashpermalla3494
    @yashpermalla3494 a month ago

    Isn’t the one who “goes first” the one on the inside?

  • @negarmahmoudi-wt5bg
    @negarmahmoudi-wt5bg a month ago

    Thank you for this clear explanation.

  • @AndresGarcia-pv5fe
    @AndresGarcia-pv5fe a month ago

    good but why the loud ass shopping music

  • @jameskirkham5019
    @jameskirkham5019 a month ago

    Amazing video thank you

  • @-T.K.-
    @-T.K.- a month ago

    Awesome video! This is very very helpful (as I'm going to take the convex optimization class exam tomorrow...) However, I am a bit confused at around 6:30. You mentioned that the minimizer x goes first and the maximizer u goes second in the expression at 6:45. I think in mathematics, the expression is evaluated inside-first? So in this case the inner part, maximizer u, would be the first player, and the minimizer x would be the second. I'm not sure if I understand this correctly...

  • @ZinzinsIA
    @ZinzinsIA a month ago

    Awesome content and video editing, thank you so much. Do you have any advice for producing this kind of graphics and animation?

  • @johns4929
    @johns4929 a month ago

    Wow, what an amazing video. I understood SVM in 2 minutes, which I didn't after watching other 15-minute tutorials.

  • @johngray6436
    @johngray6436 a month ago

    I've finally learned where the hell the Lagrangian comes from. Such a great video

  • @sidhartsatapathy1863
    @sidhartsatapathy1863 a month ago

    Sir, do you use the "MANIM" library of Python to create these beautiful animations in your great videos?

  • @user-xe5ev3xv3b
    @user-xe5ev3xv3b a month ago

    loved it