www.enrichmentcap.com
Home
About
R&D
Neural Networks From Scratch
Convolutional Neural Networks From Scratch
Adam Optimizer with CNN's
ML Activation Functions From Scratch
Addressing Exploding Gradients in CNN's
Value of Random Numbers in ML
ML Batching from Scratch
ResNet with Backpropagation from Scratch
Capsule Networks Part 1
Capsule Networks Part 2 Dynamic Routing
Capsule Networks Part 3 Reconstruction Loss
Pretrained Object Detection
Object Detection from Scratch
FFT from Scratch
Investments
Real Estate
Securities Trading
Private Equity
Opportunities
Careers
Scholarships
Contact
R&D
www.enrichmentcap.com - Neural Networks From Scratch
May 8, 2023 Understand neural networks and the math behind Machine Learning (ML) and Artificial Intelligence (AI) simply using Python (code posted below). You can download the MNIST dataset ("mndata.npz") below if you'd like to make the code as simple as possible. For this, we only need the NumPy library from Python!
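For a flavor of what the article builds, here is a minimal NumPy sketch of a one-hidden-layer network trained by plain gradient descent. The shapes and random stand-in data are illustrative only; this is not the article's posted code, which trains on the actual MNIST images.

```python
import numpy as np

# One hidden layer, trained by plain gradient descent on random
# stand-in data (the article trains on MNIST).
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 784))             # batch of flattened 28x28 images
y = rng.integers(0, 10, 64)                    # integer class labels

w1 = rng.standard_normal((784, 128)) * 0.01
w2 = rng.standard_normal((128, 10)) * 0.01

for step in range(100):
    # Forward pass: ReLU hidden layer, then softmax over 10 classes.
    h = np.maximum(0, x @ w1)
    logits = h @ w2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(64), y]).mean()  # cross-entropy

    # Backward pass: softmax + cross-entropy gradient, then the chain rule.
    dlogits = p
    dlogits[np.arange(64), y] -= 1
    dlogits /= 64
    dw2 = h.T @ dlogits
    dh = dlogits @ w2.T
    dh[h <= 0] = 0                              # ReLU derivative
    dw1 = x.T @ dh

    w1 -= 0.1 * dw1
    w2 -= 0.1 * dw2

print(round(loss, 3))  # loss on the stand-in data after 100 steps
```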
www.enrichmentcap.com - Convolutional Neural Networks From Scratch
May 8, 2023 Understand convolutional neural networks and the math behind Machine Learning (ML) and Artificial Intelligence (AI) simply using Python (code posted below). For this, we only need the NumPy library from Python! This is a continuation of the previous article: Neural Networks From Scratch.
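As a rough illustration of the core operation (a generic sketch, not the article's posted code), a convolution can be written with plain loops:

```python
import numpy as np

def conv2d(image, kernel):
    # 'Valid' 2-D cross-correlation with plain loops: the from-scratch
    # way, trading speed for clarity.
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 vertical-edge kernel slid over a random 28x28 "image".
rng = np.random.default_rng(0)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
print(conv2d(rng.standard_normal((28, 28)), kernel).shape)  # (26, 26)
```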
www.enrichmentcap.com - Adam Optimizer with CNN's
May 8, 2023 ML models can use a variety of optimizers, with Stochastic Gradient Descent (SGD) being the most intuitive. Instead of using SGD as in the previous articles, this article uses the popular adaptive moment estimation (Adam) optimizer, which incorporates the concept of momentum.
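For reference, here is a generic sketch of the Adam update rule (the standard formulas, not the article's posted code):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # m: running mean of gradients (the momentum part).
    # v: running mean of squared gradients (per-weight step scaling).
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)        # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Usage: minimize f(w) = sum(w^2); t starts at 1 for bias correction.
w, m, v = np.ones(3), np.zeros(3), np.zeros(3)
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * w, m, v, t)
print(w)  # entries have moved from 1.0 to near zero
```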
www.enrichmentcap.com - ML Activation Functions From Scratch
May 10, 2023 Activation functions can be a useful tool in machine learning. Here we are going to code activation functions and their derivatives from scratch, including popular choices like ReLU, sigmoid, and tanh. Full code and comments below. This is a continuation of the previous articles.
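A minimal sketch of these three activation functions and their derivatives in NumPy (generic definitions, not the article's posted code):

```python
import numpy as np

def relu(x):      return np.maximum(0.0, x)
def relu_d(x):    return (x > 0).astype(float)   # derivative: 1 where x > 0

def sigmoid(x):   return 1.0 / (1.0 + np.exp(-x))
def sigmoid_d(x):                                # derivative via the output
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh(x):      return np.tanh(x)
def tanh_d(x):    return 1.0 - np.tanh(x) ** 2
```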
www.enrichmentcap.com - Addressing Exploding Gradients in CNN's
May 8, 2023 When coding ML models from scratch, a major problem to deal with is NumPy runtime errors from invalid values and numerical overflows. Since ML relies on exponential functions for their differentiability, it is inevitable that sooner or later the numbers in the neural network we are training will overflow.
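A common remedy, and one the later articles in this series refer back to, is gradient clipping by norm. A minimal generic sketch (not necessarily the article's exact approach):

```python
import numpy as np

def clip_by_norm(grad, max_norm=5.0):
    # Rescale a gradient whose L2 norm exceeds max_norm, keeping its
    # direction while bounding its magnitude.
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad

print(np.linalg.norm(clip_by_norm(np.full(100, 10.0))))  # 5.0
```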
www.enrichmentcap.com - Value of Random Numbers in ML
May 9, 2023 In this article, we are going to code the Glorot initializer and the He initializer to demonstrate how valuable random numbers can be in ML. In previous articles, we coded a neural network, a two-layer convolutional neural network, the Adam optimizer, and gradient clipping, all from scratch.
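A minimal sketch of the two initializers (the standard formulas; function names here are illustrative, not the article's posted code):

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(fan_in, fan_out):
    # Glorot/Xavier: variance scaled by fan_in + fan_out,
    # commonly paired with sigmoid/tanh layers.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, (fan_in, fan_out))

def he_normal(fan_in, fan_out):
    # He: variance scaled by fan_in alone, commonly paired with ReLU.
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)

w1 = glorot_uniform(784, 128)
w2 = he_normal(128, 10)
```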
www.enrichmentcap.com - ML Batching from Scratch
May 9, 2023 For this article, we are going to batch from scratch. Previously, we created a 2-layer CNN, added activation functions and the Adam optimizer, and initialized random values by addressing variance. Now we can put it all together to create a model that is usable. In the prior articles, we…
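A minimal generic sketch of mini-batching (an illustrative helper, not the article's posted code):

```python
import numpy as np

def iterate_minibatches(x, y, batch_size=32, seed=0):
    # Shuffle once, then yield successive mini-batches; the model
    # takes one optimizer step per batch instead of per epoch.
    idx = np.random.default_rng(seed).permutation(len(x))
    for start in range(0, len(x), batch_size):
        sel = idx[start:start + batch_size]
        yield x[sel], y[sel]

x, y = np.arange(200).reshape(100, 2), np.arange(100)
for xb, yb in iterate_minibatches(x, y):
    pass  # forward pass, backprop, and weight update go here
```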
www.enrichmentcap.com - ResNet with Backpropagation from Scratch
May 10, 2023 In this article, we are going to code a ResNet and its backpropagation from scratch using only NumPy in Python. ResNets have become widely used in mainstream ML projects. Previously, we discussed solutions to exploding gradients; however, vanishing gradients become an issue as networks grow deeper.
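A minimal sketch of the core idea, a residual (skip) connection, in NumPy (illustrative shapes, not the article's posted code):

```python
import numpy as np

def residual_block(x, w1, w2):
    # y = relu(F(x) + x): the skip connection gives gradients a direct
    # path backward, which counters vanishing gradients in deep nets.
    # w2 must map back to x's width so the addition is valid.
    h = np.maximum(0, x @ w1)
    return np.maximum(0, h @ w2 + x)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
y = residual_block(x, rng.standard_normal((8, 16)), rng.standard_normal((16, 8)))
print(y.shape)  # (4, 8), same as the input
```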
www.enrichmentcap.com - Capsule Networks Part 1
May 18, 2023 In 2017, a paper was published that soon had the highest-performing ML model on the MNIST dataset while using relatively few trainable weights. When I came across the paper, I had started to suspect that dense layers utilizing matrix multiplication lose valuable information…
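The paper in question is the 2017 Capsule Network paper, whose "squash" nonlinearity is central to how capsules work. A minimal NumPy sketch (the formula is from the paper; the code itself is illustrative, not the article's posted code):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Capsule nonlinearity from the paper: short vectors shrink toward
    # zero, long vectors approach unit length, so a capsule's length
    # can be read as the probability that its entity is present.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

v = squash(np.array([[0.1, 0.0], [10.0, 0.0]]))
print(np.linalg.norm(v, axis=1))  # ~[0.01, 0.99]
```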
www.enrichmentcap.com - Capsule Networks Part 2 Dynamic Routing
May 22, 2023 In the previous article, we filtered through the Capsule Net paper to understand what capsules are and how they work. Dynamic routing, or routing-by-agreement, is another technique used in the paper. The only difference between the code in this article and the previous article is the…
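A minimal NumPy sketch of routing-by-agreement as described in the paper (the shapes and iteration count are illustrative, not the article's posted code):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, iterations=3):
    # u_hat: predictions of shape (num_in, num_out, dim). Couplings c
    # grow wherever a prediction agrees with the emerging output v.
    b = np.zeros(u_hat.shape[:2])                    # routing logits
    for _ in range(iterations):
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)         # softmax over outputs
        s = (c[:, :, None] * u_hat).sum(axis=0)      # weighted sum -> (num_out, dim)
        v = squash(s)                                # output capsules
        b = b + (u_hat * v[None]).sum(axis=-1)       # agreement update
    return v

u_hat = np.random.default_rng(0).standard_normal((1152, 10, 16))
print(dynamic_routing(u_hat).shape)  # (10, 16)
```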
www.enrichmentcap.com - Capsule Networks Part 3 Reconstruction Loss
May 23, 2023 The last major part of the Capsule Network paper is its novel approach to calculating loss. This article contains code that builds on the previous articles walking through capsules and dynamic routing. This network needs to allow two inputs, trainable weights in the loss…
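A compact sketch of the paper's loss: a margin loss on capsule lengths plus a down-weighted reconstruction (summed squared error) term. The constants are the paper's defaults; shapes and variable names are illustrative, not the article's posted code:

```python
import numpy as np

def capsnet_loss(v, labels, images, recon,
                 m_plus=0.9, m_minus=0.1, lam=0.5, recon_scale=0.0005):
    # v: output capsules (batch, classes, dim); labels: one-hot.
    # Margin loss pushes the correct capsule's length above m_plus and
    # the others below m_minus; the reconstruction error is scaled
    # down so it does not dominate training.
    lengths = np.linalg.norm(v, axis=-1)
    margin = (labels * np.maximum(0, m_plus - lengths) ** 2
              + lam * (1 - labels) * np.maximum(0, lengths - m_minus) ** 2).sum(axis=1)
    recon_err = ((images - recon) ** 2).sum(axis=1)
    return (margin + recon_scale * recon_err).mean()

rng = np.random.default_rng(0)
labels = np.eye(10)[rng.integers(0, 10, 8)]
print(capsnet_loss(rng.standard_normal((8, 10, 16)), labels,
                   rng.random((8, 784)), rng.random((8, 784))))
```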