Reports by Nishant Agarwal
In today’s fast-growing world, companies face tough competition, and their survival depends on long-term planning. A firm is successful only if it invests wisely, takes informed decisions and earns profits. Capital budgeting decisions are usually long-term decisions, so a firm needs to be especially cautious when taking the final decision on whether or not to go ahead with a project. Here, we discuss the case of a hypothetical company through which we learn different aspects of capital budgeting decisions.
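A common quantitative basis for such decisions is the net present value (NPV) of a project's cash flows. The sketch below is a minimal illustration of an NPV-based accept/reject rule; the cash flows and discount rate are hypothetical assumptions, not figures from the case discussed in the report.

```python
# Minimal NPV sketch for a capital budgeting decision.
# The cash flows and discount rate below are hypothetical examples,
# not figures from the case discussed in the report.

def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year-0 outlay of 1,000 followed by five annual inflows of 300.
project = [-1000, 300, 300, 300, 300, 300]
discount_rate = 0.10

value = npv(discount_rate, project)
print(f"NPV at {discount_rate:.0%}: {value:.2f}")
print("Accept the project" if value > 0 else "Reject the project")
```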
This report provides complete details of the development of various mechanisms pertaining to the electrical design of the robot for the competition ABU Robocon 2016, Pune, covering all the mechanisms, designs, proofs of concept, algorithms and implementation.
The literature survey was done during a case study on "Child Labour in India" in Delhi. This paper presents an understanding of the definition, laws, causes and solutions to the problem of child labour in India, with qualitative and quantitative data sourced from government, media and UN sources.
Other by Nishant Agarwal
Model building and the creation of benchmark models is the first step towards model validation. In this section, we learn various model building techniques, starting with simple linear regression and then moving to more complex techniques such as GLMs (Generalised Linear Models) and GAMs (Generalised Additive Models).
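As a minimal illustration of this progression, the sketch below fits an ordinary linear regression and a Poisson GLM on synthetic data using statsmodels; the data-generating process and model choices are assumptions for illustration, not the benchmark models built in the section.

```python
# Sketch: simple linear regression vs. a Poisson GLM on synthetic data.
# The data-generating process here is assumed purely for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=200)
X = sm.add_constant(x)                      # design matrix with intercept

# Continuous response -> ordinary least squares (simple linear regression)
y_lin = 1.5 + 2.0 * x + rng.normal(0, 0.5, size=200)
ols_fit = sm.OLS(y_lin, X).fit()

# Count response -> Poisson GLM with log link
y_cnt = rng.poisson(np.exp(0.3 + 0.8 * x))
glm_fit = sm.GLM(y_cnt, X, family=sm.families.Poisson()).fit()

print(ols_fit.params)   # estimated intercept and slope
print(glm_fit.params)   # estimated coefficients on the log scale
```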
Poster on quantum levitation using superconductors, presented at Open House IIT Delhi 2016, Delhi, India. The poster introduces the basics of quantum levitation, how it works and its applications. It was made as part of the course curriculum.
Coursework by Nishant Agarwal
This paper describes the use of a multi-variable linear regression algorithm for estimation on a data set. When the output depends linearly on $m$ inputs, the linear regression hypothesis is defined as $h_\theta(x) = \theta_0 + \sum_{i=1}^{m} \theta_i x_i$, where $h_\theta(x)$ is the hypothesised value for a given input $x$ and a particular set of parameters $\theta$. For this assignment I have applied the batch gradient descent method and analysed the effect of various hyperparameters and optimizers on the accuracy of prediction.
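A minimal sketch of batch gradient descent for this hypothesis is shown below; the synthetic data, learning rate and iteration count are assumptions for illustration, not the assignment's actual dataset or settings.

```python
# Sketch: batch gradient descent for multi-variable linear regression.
# Synthetic data and hyperparameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 3
X = rng.normal(size=(n, m))
true_theta = np.array([4.0, 2.0, -1.0, 0.5])          # [theta_0, theta_1, ..., theta_m]
X_b = np.hstack([np.ones((n, 1)), X])                 # prepend a column of ones for theta_0
y = X_b @ true_theta + rng.normal(0, 0.1, size=n)

theta = np.zeros(m + 1)
lr, iters = 0.1, 500
for _ in range(iters):
    residual = X_b @ theta - y                        # h_theta(x) - y for every example
    grad = X_b.T @ residual / n                       # gradient of the mean squared error
    theta -= lr * grad                                # batch update using all examples

print(theta)   # should be close to true_theta
```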
There are many situations where we have binary outcomes (it rains in Delhi on a given day, or it doesn't; a person has heart disease, or not) and some input variables which may be discrete or continuous. This kind of problem is known as classification, and this paper describes the use of logistic regression for binary classification of a data set. The hypothesis function for the logistic regression model is $h_\theta(x) = \frac{1}{1 + \exp(-\theta^T x)}$, where $h_\theta(x)$ is the hypothesised value for a given input $x$ and a particular set of parameters $\theta$. For this assignment I have applied the gradient descent method and analysed the effect of various hyperparameters and optimizers on the accuracy of prediction.
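The following sketch implements gradient descent for this hypothesis under the cross-entropy loss; the generated data and hyperparameters are assumed for illustration.

```python
# Sketch: gradient descent for binary logistic regression.
# The synthetic data and hyperparameters below are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 2))
X_b = np.hstack([np.ones((n, 1)), X])                 # bias column
true_theta = np.array([-0.5, 2.0, -3.0])
y = (sigmoid(X_b @ true_theta) > rng.uniform(size=n)).astype(float)

theta = np.zeros(3)
lr, iters = 0.5, 1000
for _ in range(iters):
    p = sigmoid(X_b @ theta)                          # h_theta(x) for every example
    grad = X_b.T @ (p - y) / n                        # gradient of the cross-entropy loss
    theta -= lr * grad

preds = (sigmoid(X_b @ theta) >= 0.5).astype(float)
print("training accuracy:", (preds == y).mean())
```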
There are many situations where we need to separate data into clusters without any labels being provided; this is an example of unsupervised learning. In this assignment we apply the K-Means algorithm for unsupervised learning on the given dataset and analyse the effect of various parameters, including the number of clusters and the initialization method, on the accuracy of clustering.
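A minimal K-Means sketch in the spirit of the assignment is given below; the generated data, the choice of k and the random initialization are assumptions for illustration.

```python
# Sketch: Lloyd's K-Means algorithm with random initialization.
# The data and number of clusters are illustrative assumptions.
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]   # random init from the data
    for _ in range(iters):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2)) for c in [(0, 0), (3, 3), (0, 4)]])
centroids, labels = kmeans(X, k=3)
print(centroids)
```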
There are many problems in the world where we have more than two outcomes (identifying a digit from the MNIST database, etc.). When the goal is to distinguish between multiple classes, the softmax regression model is useful. This model is a generalisation of logistic regression for multiclass classification. The hypothesis for this regression model is based on the softmax function, given by $\sigma(z)_j = \frac{\exp(z_j)}{\sum_{k=1}^{K} \exp(z_k)}$ for $j = 1, \ldots, K$. Here $z$ is a $K$-dimensional vector squashed to a $K$-dimensional vector $\sigma(z)$ of real values in the range $(0, 1)$ that add up to 1. For this assignment I have applied the gradient descent method and analysed the effect of various hyperparameters and optimizers on the accuracy of prediction.
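A small sketch of softmax regression trained by gradient descent is shown below; the synthetic three-class data and the hyperparameters are illustrative assumptions, not the assignment's setup.

```python
# Sketch: softmax regression (multinomial logistic regression) by gradient descent.
# Synthetic 3-class data and hyperparameters are illustrative assumptions.
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)              # shift for numerical stability
    expZ = np.exp(Z)
    return expZ / expZ.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
K, n = 3, 300
means = np.array([[0, 0], [3, 0], [0, 3]])
X = np.vstack([rng.normal(loc=m, size=(n // K, 2)) for m in means])
y = np.repeat(np.arange(K), n // K)
X_b = np.hstack([np.ones((n, 1)), X])                 # bias column
Y = np.eye(K)[y]                                      # one-hot labels

Theta = np.zeros((X_b.shape[1], K))
lr, iters = 0.5, 1000
for _ in range(iters):
    P = softmax(X_b @ Theta)                          # sigma(z)_j for every example
    grad = X_b.T @ (P - Y) / n                        # gradient of the cross-entropy loss
    Theta -= lr * grad

print("training accuracy:", (softmax(X_b @ Theta).argmax(axis=1) == y).mean())
```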
In today's world, neural networks have become synonymous with machine learning. We see artificial neural networks used everywhere, from face recognition to classifying handwritten digits. In this paper I discuss the implementation of a neural network for classifying images of handwritten digits. The neural network consists of a large number of neurons in multiple layers, making predictions according to their activations. Many types of activation functions are used, such as sigmoid, tanh and linear. For updating the weights and biases, the gradient descent algorithm is used.
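The sketch below is a minimal one-hidden-layer network with sigmoid activations trained by gradient descent (backpropagation) on a toy problem; the architecture, data and hyperparameters are assumptions for illustration and not the digit-classification network described in the paper.

```python
# Sketch: a one-hidden-layer neural network with sigmoid activations,
# trained by gradient descent (backpropagation) on the XOR toy problem.
# Architecture and hyperparameters are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))    # input -> hidden (4 units)
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))    # hidden -> output

lr = 1.0
for _ in range(5000):
    # forward pass
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    # backward pass (gradients of the mean squared error)
    d_out = (out - y) * out * (1 - out)
    d_H = (d_out @ W2.T) * H * (1 - H)
    # gradient descent updates on weights and biases
    W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_H / len(X);   b1 -= lr * d_H.mean(axis=0, keepdims=True)

print(np.round(out.ravel(), 3))   # typically approaches [0, 1, 1, 0]
```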
Drafts by Nishant Agarwal
History is witness to the stimulus that the ever-increasing demand for greater wireless throughput has provided for the development of cutting-edge technologies in wireless communications. As data-rate demands soar to 100 Mbps per user, Massive MIMO, a multiplicity of physically small, individually controlled antennas that performs aggressive multiplexing/demultiplexing for all active users using directly measured channel characteristics, has emerged as a viable solution [1]. This paper provides a brief survey of one of the proposed techniques for realizing such infrastructure, combining large-dimensional analog pre/post-processing with lower-dimensional digital processing, and provides an analysis of the performance of massive MIMO systems with respect to Shannon capacity and bit error rate, incorporating the channel's dynamic variations [2].
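As a rough illustration of how such a capacity analysis can be set up, the sketch below estimates the ergodic uplink sum capacity of an i.i.d. Rayleigh-fading channel with perfect channel knowledge; the antenna counts, per-user SNR and channel model are assumptions for illustration and do not reproduce the paper's analysis.

```python
# Sketch: ergodic uplink sum capacity of an i.i.d. Rayleigh MIMO channel,
# C = E[ log2 det(I_K + rho * H^H H) ], with K users each at SNR rho,
# averaged over random channel draws. All parameters are illustrative assumptions.
import numpy as np

def ergodic_capacity(M, K, snr_db, trials=500, seed=0):
    """M base-station antennas, K single-antenna users, per-user SNR in dB."""
    rng = np.random.default_rng(seed)
    rho = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
        G = np.eye(K) + rho * (H.conj().T @ H)
        caps.append(np.log2(np.linalg.det(G).real))
    return np.mean(caps)

# Sum capacity grows with the number of base-station antennas M.
for M in (16, 64, 256):
    print(M, "antennas:", round(ergodic_capacity(M, K=8, snr_db=10), 2), "bits/s/Hz")
```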
Thesis Chapters by Nishant Agarwal

While academicians, scholars and leaders around the globe envisage our cities of tomorrow with millions of augmented reality users transmitting and receiving holographic videos demanding wireless throughput up to 100 megabits per second per user, there prevails one immortal truth: the amount of available electromagnetic spectrum in nature is constant. In order to meet this ever-increasing total wireless throughput demand reliably, access points with multiple antennas can be employed. Hence, Massive MIMO emerges as a promising player in realizing these objectives. It is essentially based on Large Scale Antenna Systems, with myriad physically small and individually controlled antennas processing the data of multiple users by performing extensive multiplexing/demultiplexing operations. What gives Massive MIMO an edge over other technologies is that it utilizes directly measured channel characteristics for its operations and is a scalable technology, thanks to the leverage Time Division Duplexing (TDD) provides. While the principles of operation of this technology are seemingly simple to comprehend, the technology is yet to be deployed in practice. This work provides a brief survey of one of the techniques for realizing such infrastructure, combining large-dimensional analog pre/post-processing with lower-dimensional digital processing. We have also analyzed the performance of this technology in terms of reliability and capacity vis-à-vis physical hardware constraints.
Conference Presentations by Nishant Agarwal
In this presentation I discuss the emerging field of data science, answering the fundamental questions of what, why and how. The presentation aims to provide a simple overview of the field of data science and of various learning and career opportunities.