<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.1.1">Jekyll</generator><link href="https://theguywithblacktie.github.io/kernel/feed.xml" rel="self" type="application/atom+xml" /><link href="https://theguywithblacktie.github.io/kernel/" rel="alternate" type="text/html" /><updated>2021-12-24T11:51:30-06:00</updated><id>https://theguywithblacktie.github.io/kernel/feed.xml</id><title type="html">Multiplying Matrices for a living</title><subtitle>NLP by day, CV by night. Documenting my learnings of Machine Learning. Motto: Be Limitless
</subtitle><entry><title type="html">What is BLEU score?</title><link href="https://theguywithblacktie.github.io/kernel/deep%20learning/2021/07/20/BLEU.html" rel="alternate" type="text/html" title="What is BLEU score?" /><published>2021-07-20T00:00:00-05:00</published><updated>2021-07-20T00:00:00-05:00</updated><id>https://theguywithblacktie.github.io/kernel/deep%20learning/2021/07/20/BLEU</id><author><name></name></author><category term="Deep Learning" /><summary type="html">BLEU, or Bilingual Evaluation Understudy, is a metric for comparing machine translations against human-created reference translations of the same source sentence. It was introduced by Kishore Papineni et al. in their 2002 paper “BLEU: a Method for Automatic Evaluation of Machine Translation”.</summary></entry><entry><title type="html">Generative Adversarial Networks</title><link href="https://theguywithblacktie.github.io/kernel/deep%20learning/2021/06/30/GAN.html" rel="alternate" type="text/html" title="Generative Adversarial Networks" /><published>2021-06-30T00:00:00-05:00</published><updated>2021-06-30T00:00:00-05:00</updated><id>https://theguywithblacktie.github.io/kernel/deep%20learning/2021/06/30/GAN</id><author><name></name></author><category term="Deep Learning" /><summary type="html">We have all come across articles and posts about AI systems capable of producing human-like speech or generating images of non-existent people that are difficult to distinguish from real ones. 
These AI systems are built upon generative adversarial networks (GANs), which Facebook AI Research director Yann LeCun called the “most interesting idea in the last 10 years in ML.” GANs were introduced in a paper by Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio, in 2014.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://theguywithblacktie.github.io/kernel/images/GAN_3.png" /><media:content medium="image" url="https://theguywithblacktie.github.io/kernel/images/GAN_3.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Understanding Support Vector Machine</title><link href="https://theguywithblacktie.github.io/kernel/machine%20learning/2021/06/06/SVM.html" rel="alternate" type="text/html" title="Understanding Support Vector Machine" /><published>2021-06-06T00:00:00-05:00</published><updated>2021-06-06T00:00:00-05:00</updated><id>https://theguywithblacktie.github.io/kernel/machine%20learning/2021/06/06/SVM</id><author><name></name></author><category term="Machine Learning" /><summary type="html">Among the many supervised classification algorithms available in Machine Learning, Support Vector Machine (SVM) is one of the simplest, yet it can be difficult to grasp because of its subtle nuances. In this blog, I lay down my notes on unboxing the SVM black box and explain it in detail. 
This blog covers the entirety of SVM, from its introduction to classification and the kernel trick.</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://theguywithblacktie.github.io/kernel/images/svm_multi_hyperplane.png" /><media:content medium="image" url="https://theguywithblacktie.github.io/kernel/images/svm_multi_hyperplane.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Understanding Cross-Entropy Loss and Focal Loss</title><link href="https://theguywithblacktie.github.io/kernel/machine%20learning/pytorch/2021/05/20/cross-entropy-loss.html" rel="alternate" type="text/html" title="Understanding Cross-Entropy Loss and Focal Loss" /><published>2021-05-20T00:00:00-05:00</published><updated>2021-05-20T00:00:00-05:00</updated><id>https://theguywithblacktie.github.io/kernel/machine%20learning/pytorch/2021/05/20/cross-entropy-loss</id><author><name></name></author><category term="Machine Learning" /><category term="PyTorch" /><summary type="html">[Updated on 03-12-2021: Fixed the Focal Loss Code]</summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://theguywithblacktie.github.io/kernel/images/focal_loss%20and%20CE%20loss.png" /><media:content medium="image" url="https://theguywithblacktie.github.io/kernel/images/focal_loss%20and%20CE%20loss.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Adding Variable Number of Layers in Neural Network</title><link href="https://theguywithblacktie.github.io/kernel/pytorch/2021/05/14/vary-layers-pytorch.html" rel="alternate" type="text/html" title="Adding Variable Number of Layers in Neural Network" /><published>2021-05-14T00:00:00-05:00</published><updated>2021-05-14T00:00:00-05:00</updated><id>https://theguywithblacktie.github.io/kernel/pytorch/2021/05/14/vary-layers-pytorch</id><author><name></name></author><category term="PyTorch" /><summary type="html">Consider the following code block that defines a 
fixed 2-layer neural network. Imagine a scenario where the network has a huge number of layers and typing out each layer manually is just not feasible. An even more notable scenario is when the number of layers in the network is not fixed and depends on some other configuration. This article deals with these scenarios and lays out a solution.</summary></entry></feed>