Autoencoders in Keras and Deep Learning

We are all familiar with supervised machine learning, where an ML algorithm learns the relationship between input features and labels from the training data and is then expected to infer the same kind of relationship between test data features and the set of possible output labels. Although we solve lots of problems with supervised learning… Read More »
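Ahead of the full article, here is a minimal sketch of a dense autoencoder in Keras. The layer sizes, the 784-dimensional input (a flattened 28x28 image), and the random placeholder data are illustrative assumptions, not the article's exact model.

```python
# A minimal dense autoencoder sketch in Keras; sizes are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
encoded = layers.Dense(32, activation="relu")(inputs)       # bottleneck code
decoded = layers.Dense(784, activation="sigmoid")(encoded)  # reconstruction

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Unsupervised: the targets are the inputs themselves (placeholder data).
x_train = np.random.rand(100, 784).astype("float32")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=32, verbose=0)
```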

Sentiment Classification with Deep Learning: RNN, LSTM, and CNN

Sentiment classification is a common task in Natural Language Processing (NLP). There are various ways to perform sentiment classification in Machine Learning (ML). In this article, we talk about how to perform sentiment classification with Deep Learning (Artificial Neural Networks). In my previous two articles, we have already talked about how to perform sentiment analysis using different traditional machine… Read More »
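For a taste of the LSTM variant, here is a minimal Keras sketch. The vocabulary size, sequence length, and layer widths are assumptions for illustration; the article's actual architectures may differ.

```python
# A minimal LSTM sentiment classifier sketch in Keras.
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, max_len = 10000, 200  # illustrative assumptions

model = keras.Sequential([
    keras.Input(shape=(max_len,)),          # integer-encoded token ids
    layers.Embedding(vocab_size, 64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```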

Sentiment Analysis with Python: TFIDF features

In my previous article, ‘Sentiment Analysis with Python: Bag of Words’, we compared the results of three traditional machine learning sentiment classification algorithms using bag-of-words features (from scratch). This is my second article on sentiment analysis, in continuation of that one, and this time we are going to experiment with TFIDF features for the task of Sentiment Analysis on… Read More »
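As a quick sketch of the idea, TF-IDF features can be extracted with scikit-learn's TfidfVectorizer and fed to any classifier. The toy reviews and the logistic-regression choice below are placeholders, not the article's exact setup.

```python
# A minimal TF-IDF feature pipeline sketch with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviews = ["great movie, loved it", "terrible plot, waste of time"]
labels = [1, 0]  # 1 = positive, 0 = negative

vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=5000)
X = vectorizer.fit_transform(reviews)  # sparse TF-IDF matrix

clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["loved the plot"])))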

Sentiment Analysis with Python: Bag of Words

Sentiment Analysis Overview: Sentiment Analysis (also known as opinion mining or emotion AI) is a common task in NLP (Natural Language Processing). It involves identifying or quantifying the sentiment of a given sentence, paragraph, or document of textual data. Sentiment analysis techniques are widely applied to customer feedback data (e.g., reviews, survey responses, social media posts). Sentiment analysis has proved… Read More »
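Bag-of-words simply counts how often each vocabulary word appears in a document. A minimal sketch with scikit-learn's CountVectorizer, on toy placeholder sentences:

```python
# A minimal bag-of-words sketch with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the food was good", "the service was bad"]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())  # learned vocabulary
print(X.toarray())                         # per-document word counts
```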

Optimizers explained for training Neural Networks

Overview: Training a Deep Learning model (or any machine learning model, in fact) is all about bringing the model predictions (model output) close to the real output (ground truth) for a given set of input-output pairs. Once the model's results are close to the real results, our job is done. To understand how close the model's predictions are with respect… Read More »
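The core idea behind every optimizer is to nudge the parameters against the gradient of the loss. A minimal sketch of vanilla gradient descent in plain NumPy, on a one-parameter linear model with mean squared error (the learning rate and data are assumptions for illustration):

```python
# Vanilla gradient descent on a 1-D linear model, w * x, with MSE loss.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x            # ground truth generated with w_true = 2
w, lr = 0.0, 0.1       # initial weight and learning rate

for step in range(50):
    pred = w * x
    grad = np.mean(2 * (pred - y) * x)  # dL/dw for mean squared error
    w -= lr * grad                      # the gradient-descent update rule

print(round(w, 3))  # converges toward 2.0
```

Optimizers like Momentum, RMSProp, and Adam all refine this same update step, mainly by adapting the step size and smoothing the gradient.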

1D-CNN based Fully Convolutional Model for Handwriting Recognition

Handwriting Recognition, also termed HTR (Handwritten Text Recognition), is a machine learning task that aims at giving machines the ability to read human handwriting from real-world documents (images). Traditional Optical Character Recognition (OCR) systems are trained to handle the variations and font styles in machine-printed text (from documents/images), and they work really well in practice (e.g., Tesseract). Handwriting Recognition, on… Read More »
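To illustrate the "fully convolutional" idea, here is a minimal Keras sketch: a stack of Conv1D layers with no dense layers, so it accepts variable-length sequences. The input feature width, layer sizes, and number of output classes are illustrative assumptions, not the article's exact architecture.

```python
# A minimal fully convolutional 1D model sketch in Keras.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 80  # e.g., character set size; placeholder value

model = keras.Sequential([
    layers.Input(shape=(None, 32)),  # variable-length feature sequence
    layers.Conv1D(64, 5, padding="same", activation="relu"),
    layers.Conv1D(128, 5, padding="same", activation="relu"),
    layers.Conv1D(num_classes, 1, activation="softmax"),  # per-step scores
])  # no Dense layers, so the model stays fully convolutional
model.summary()
```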

Optimizing TensorFlow models with Quantization Techniques

Deep Learning models are great at solving extremely complex tasks efficiently, but this superpower comes at a cost. Due to their large number of parameters, these models are typically big in size (memory footprint) and slow at inference (during predictions). Slow and heavy models are not much appreciated when it comes to deployment. As we… Read More »
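As a quick sketch, post-training quantization can be applied through the TensorFlow Lite converter; the one-layer placeholder model below stands in for whatever trained tf.keras model you want to shrink.

```python
# A minimal post-training quantization sketch with the TFLite converter.
import tensorflow as tf
from tensorflow import keras

# Placeholder model; use your trained tf.keras model here instead.
model = keras.Sequential([keras.Input(shape=(784,)), keras.layers.Dense(10)])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)  # the serialized, smaller model
```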

Sampling Techniques in Statistics for Machine Learning

Data is fuel to a Data Scientist. Any study or research work requires a good amount of quality data, and what counts as ‘a good amount of quality data’ changes with the kind of study one wants to do. Various sampling techniques exist to get you just that. As a researcher, you may want to study different animals, changing… Read More »
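Two of the most common techniques are easy to demonstrate with pandas: simple random sampling and stratified sampling, which preserves group proportions. The toy DataFrame below is a placeholder.

```python
# Simple random vs. stratified sampling, sketched with pandas.
import pandas as pd

df = pd.DataFrame({"animal": ["cat"] * 70 + ["dog"] * 30,
                   "weight": range(100)})

simple = df.sample(n=20, random_state=42)  # simple random sample

# Stratified: sample 20% from each group, keeping class proportions.
stratified = df.groupby("animal", group_keys=False).apply(
    lambda g: g.sample(frac=0.2, random_state=42))
print(stratified["animal"].value_counts())  # still ~70/30 cats to dogs
```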

How to deal with Imbalanced data in classification?

If you have some experience solving classification problems, then you must have encountered imbalanced data several times. An imbalanced dataset is one with an uneven distribution of examples across the different classes. Consider a binary classification problem with two classes, 1 and 0, and suppose more than 90% of your training examples… Read More »
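Two common remedies can be sketched with scikit-learn alone: weighting the rare class more heavily in the loss, and oversampling the minority class before training. The toy data and the logistic-regression choice are assumptions for illustration.

```python
# Class weighting and random oversampling, sketched with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

X = np.random.rand(1000, 5)
y = np.array([0] * 950 + [1] * 50)  # 95% vs 5%: imbalanced toy labels

# Option 1: penalize mistakes on the rare class more heavily.
clf = LogisticRegression(class_weight="balanced").fit(X, y)

# Option 2: oversample the minority class up to the majority count.
X_min, y_min = X[y == 1], y[y == 1]
X_up, y_up = resample(X_min, y_min, n_samples=950, random_state=42)
X_bal = np.vstack([X[y == 0], X_up])
y_bal = np.concatenate([y[y == 0], y_up])
```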

Bagging, Boosting, and Stacking in Machine Learning

Ensemble learning techniques are quite popular in machine learning. These techniques work by training multiple models and combining their results to get the best possible outcome. In this article, we will learn about three popular ensemble learning methods: bagging, boosting, and stacking. Each of these methods has its own benefits and limitations, but in practice ensemble methods often… Read More »
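All three methods are available off the shelf in scikit-learn; a minimal side-by-side sketch, with the base estimators and toy dataset chosen purely for illustration:

```python
# Bagging, boosting, and stacking, sketched with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=42)

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)
boosting = GradientBoostingClassifier(n_estimators=100)
stacking = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier()),
                ("lr", LogisticRegression())],
    final_estimator=LogisticRegression())  # meta-model over base outputs

for name, model in [("bagging", bagging), ("boosting", boosting),
                    ("stacking", stacking)]:
    print(name, model.fit(X, y).score(X, y))
```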