In 1959, Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term Machine Learning. He defined machine learning as the "field of study that gives computers the ability to learn without being explicitly programmed."
In 1997, Tom Mitchell gave a more formal, mathematical definition: "A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E."
In layman's terms, consider that you are trying to toss a paper into a dustbin. In first…
The budget range of 50,000 to 60,000 is good for buying a well-performing branded laptop loaded with several impressive features. Popular brands such as Dell, HP, Lenovo, Asus and several others sell their latest laptops in this range.
Despite being the smallest AI and deep learning supercomputer in the world, the Nvidia Jetson Xavier NX is among the best-performing and most energy-efficient single-board computers. It clocks about 21 Tera Operations Per Second (TOPS) on a 15W power supply, and at just 10W it still delivers around 14 TOPS. Its ultra-small 70×45mm footprint takes up little space but packs in a 6-core NVIDIA Carmel ARM v8.2 64-bit CPU, a 384-core NVIDIA Volta GPU and 48 Tensor Cores.
The Jetson Nano is designed for AI enthusiasts, hobbyists and developers who want to build projects that implement AI. The NVIDIA Jetson Nano delivers the computing performance to run multiple neural networks alongside other applications, for tasks such as object detection, segmentation, speech processing and image classification, while processing data from several high-resolution sensors.
Though its CPU core is older and weaker than those of some other SBCs, the Jetson Nano has a much more capable GPU, with performance designed specifically for AI applications.
Jetson Nano is also the perfect tool to start learning about AI and robotics. It opens the world of embedded IoT applications…
AUR (Arch User Repository) is a community-driven repository for Arch Linux users, containing PKGBUILDs (package descriptions) that allow you to compile a package from source with makepkg and then install it via pacman.
Ensure the base-devel package group is installed. git is also required, to download packages, and python-pip to install setuptools.
sudo pacman -S base-devel git python-pip
Install setuptools using pip.
pip install setuptools
git clone https://aur.archlinux.org/aurman.git
A PKGBUILD can be built into an installable package using makepkg, then installed using…
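Continuing from the clone above, a typical build-and-install sequence looks like this (the `-s` and `-i` flags are standard makepkg options; `-s` installs build dependencies and `-i` installs the built package via pacman):

```shell
cd aurman
makepkg -si   # build from the PKGBUILD, then install the result with pacman
```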
Regularization is an important technique to avoid overfitting the training data, especially when performance on the training and test data differs widely.
Regularization works by adding a "penalty" term to the RSS (residual sum of squares) to achieve lower variance on the test data.
The RSS is modified by adding the sum of the squares of the coefficients B.
Suppose the following equation is the regression model.
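A sum-of-squared-coefficients penalty is the L2 (ridge) penalty, which has a closed-form solution. A minimal sketch, using toy data invented here for illustration:

```python
import numpy as np

# Hypothetical toy data: y = 2x + noise (invented for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1))
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=20)

# Ridge objective: RSS + alpha * sum(B_j^2)
# Closed form: B = (X^T X + alpha I)^{-1} X^T y
alpha = 1.0
Xb = np.hstack([np.ones((20, 1)), X])  # prepend an intercept column
I = np.eye(Xb.shape[1])
I[0, 0] = 0  # conventionally the intercept is not penalized
B = np.linalg.solve(Xb.T @ Xb + alpha * I, Xb.T @ y)
```

The penalty shrinks the fitted coefficient slightly below the true slope of 2, which is exactly the bias-for-variance trade that regularization makes.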
How do I calculate accuracy for my regression model?
This is a common question from beginners working on a regression predictive modeling project. But accuracy is a measure for classification, not regression; we cannot calculate accuracy for a regression model. The performance of a regression model is measured by the error in its predictions.
For example, if you are predicting the value of a house, you don't want to know whether the model predicts the exact value; what you want to know is how close the predictions are to the expected values.
There are three error metrics that are…
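The list of three metrics is cut off here, but the standard trio for regression is MAE, MSE and RMSE (an assumption on my part about which three the author means). A quick sketch with invented house-price values:

```python
import numpy as np

# Hypothetical predictions vs. actual house values (invented for illustration)
y_true = np.array([200.0, 150.0, 320.0, 270.0])
y_pred = np.array([210.0, 140.0, 300.0, 275.0])

mae = np.mean(np.abs(y_true - y_pred))   # Mean Absolute Error
mse = np.mean((y_true - y_pred) ** 2)    # Mean Squared Error
rmse = np.sqrt(mse)                      # Root Mean Squared Error
```

Each metric answers the "how close were the predictions?" question, but MSE and RMSE penalize large misses more heavily than MAE does.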
In the previous article, I wrote about Linear Regression, minimizing the error by choosing suitable coefficients, the Gradient Descent method, overdetermined systems of equations, etc. In this article, I write about Polynomial Regression and the other topics in the title.
This is an image of Linear Regression
Polynomial regression is a form of regression in which the relationship between the independent variable x and the dependent variable y is modeled as an nᵗʰ degree polynomial in x.
Suppose the equation of a machine learning model is

y = B0·1 + B1·x1 + B2·x2 + … + Bn·xn

where B0, B1, … are the parameters, 1, x1, x2, … are the features (for polynomial regression, xi = xⁱ), and the fitted curve is a polynomial of degree n.
For example, suppose the following table is the training dataset of a machine learning model, where x0, x1, x2, x3 are the features and y is the result.
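The fit described above can be sketched as an ordinary least-squares solve over polynomial features x0 = 1, x1 = x, x2 = x², x3 = x³. The data below is invented for illustration (it follows y = 2x² + 1 exactly, so the recovered coefficients are easy to check):

```python
import numpy as np

# Hypothetical training data for a single input x (invented for illustration)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 9.0, 19.0, 33.0])  # exactly y = 2x^2 + 1

# Build the feature matrix with columns x0=1, x1=x, x2=x^2, x3=x^3
X = np.vander(x, N=4, increasing=True)

# Solve the (overdetermined) least-squares problem for the parameters B
B, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With 5 equations and 4 unknowns this is an overdetermined system, the same setting mentioned in the previous article; lstsq returns the coefficients minimizing the RSS, here B ≈ [1, 0, 2, 0].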
Gradient Descent is an optimization algorithm used in Machine/Deep Learning to minimize an objective function f(x) iteratively. For a convex objective, it converges to the global minimum of the function.
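The iterative update rule can be sketched in a few lines; the function and learning rate below are chosen only for illustration. For the convex function f(x) = (x − 3)², the gradient is f'(x) = 2(x − 3) and the global minimum sits at x = 3:

```python
# Minimal gradient-descent sketch on the convex function f(x) = (x - 3)^2
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # step in the direction opposite the gradient
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Each iteration moves x against the gradient, so the sequence converges to the minimizer x = 3.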
Before starting, you should know the following mathematical concepts.