Machine Learning

Introduction:

Machine Learning (ML) is gaining traction as more people realize that it can help with a wide range of essential applications, including data mining, natural language processing, image recognition, and expert systems. ML has the capacity to solve problems in all of these areas and more, and it is poised to become a cornerstone of our future civilization.
This need has yet to be met by the supply of capable ML practitioners. One of the main reasons is that machine learning is inherently difficult. This lesson covers the fundamentals of machine learning theory, setting out the common themes and concepts in a way that makes it simple to follow the reasoning and become familiar with the fundamentals.

What is Machine Learning and How Does It Work?

What is “machine learning” in the first place? ML encompasses a wide range of concepts. The discipline is large and fast-developing, with sub-specialties and kinds of machine learning constantly being partitioned and sub-partitioned.
However, there are certain fundamental common threads, and the overarching idea is best summarized by Arthur Samuel’s oft-quoted remark from 1959: “[Machine Learning is the] field of study that gives computers the ability to learn without being explicitly programmed.”
Tom Mitchell later, in 1997, provided a “well-posed” definition that has proven more useful to engineers: “A computer program is said to learn from experience E with respect to some task T and some performance measure P if its performance on T, as measured by P, improves with experience E.”

How does it work?

Machine Learning is a sub-discipline of artificial intelligence that refers to the capacity of computer systems to find solutions to problems on their own by identifying patterns in data. To put it another way, Machine Learning allows IT systems to detect patterns in existing data sets using algorithms and to derive appropriate solutions from them. In Machine Learning, artificial knowledge is thus generated from experience.

Humans must take action, however, before the software can develop solutions on its own. For example, the needed methods and data must be loaded into the systems ahead of time, along with the relevant analysis rules for recognizing patterns in the data pool. Once these two steps are complete, the system can use Machine Learning to execute the following tasks:

  • Finding, extracting and summarizing relevant data.
  • Making predictions based on the analysis data.
  • Calculating probabilities for specific results.
  • Adapting to certain developments autonomously.
  • Optimizing processes based on recognized patterns.
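As a toy illustration of the probability-calculation task above, here is a minimal Python sketch (the event log is hypothetical) that estimates outcome probabilities as empirical frequencies over past observations:

```python
from collections import Counter

# Hypothetical event log: past outcomes observed by the system.
events = ["ok", "ok", "fault", "ok", "fault", "ok", "ok", "ok"]

counts = Counter(events)
total = len(events)

# "Calculating probabilities for specific results" as empirical frequencies.
probabilities = {outcome: n / total for outcome, n in counts.items()}

print(probabilities)  # {'ok': 0.75, 'fault': 0.25}
```

A real system would of course learn from far more data and a richer model, but the principle is the same: patterns in past data become estimates about future results.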

Limitations:

Machine learning has proven transformational in several sectors, yet it frequently fails to produce the desired outcomes. There are a variety of reasons for this, including a lack of (appropriate) data, data access issues, data bias, privacy issues, poorly designed tasks and algorithms, incorrect tools and personnel, a lack of resources, and assessment issues.
In 2018, an Uber self-driving car failed to identify a person, and the pedestrian was killed as a result of the accident.
Even after years of effort and billions of dollars, IBM Watson’s attempts to apply machine learning in healthcare failed to deliver.

Hardware and software:

Hardware:

More effective techniques for training deep neural networks (a subdomain of machine learning) that incorporate many layers of non-linear hidden units were developed during the 2010s, thanks to advancements in both machine learning algorithms and computer hardware. By 2019, GPUs had supplanted CPUs as the dominant means of training large-scale commercial cloud AI, often with AI-specific enhancements. From AlexNet (2012) to AlphaZero (2017), OpenAI measured the hardware compute used in the largest deep learning projects and found a 300,000-fold increase, with a doubling-time trendline of 3.4 months.

Software:

Machine Learning has risen to prominence as one of the most important technological advancements of the twenty-first century. Since there are so many popular tools that can be used for designing machine learning solutions, we’ll look at some of the most common software options for constructing your own machine learning model.

TensorFlow

TensorFlow is one of the best-known names for Machine Learning in the Data Science field. Its broad support for CUDA GPUs makes it easy to develop both statistical and deep learning solutions. TensorFlow’s most fundamental data type is the tensor, a multi-dimensional array.
It is an open-source toolkit that may be used to create machine learning pipelines and scalable data processing systems. It supports and executes a variety of machine learning applications, including computer vision, natural language processing, and reinforcement learning. For novices, TensorFlow is one of the most important Machine Learning tools.
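To make the tensor idea concrete without assuming TensorFlow is installed, here is a sketch in plain Python that treats nested lists as tensors and computes their rank and shape; TensorFlow’s `tf.Tensor` generalizes exactly this structure:

```python
# A tensor is a multi-dimensional array. Here we sketch the idea with
# plain nested lists to show rank and shape.

def shape(tensor):
    """Return the shape of a (regularly nested) list as a tuple."""
    if not isinstance(tensor, list):
        return ()          # a scalar is a rank-0 tensor
    return (len(tensor),) + shape(tensor[0])

scalar = 5.0                       # rank 0
vector = [1.0, 2.0, 3.0]           # rank 1, shape (3,)
matrix = [[1.0, 2.0], [3.0, 4.0]]  # rank 2, shape (2, 2)

print(shape(scalar))  # ()
print(shape(vector))  # (3,)
print(shape(matrix))  # (2, 2)
```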

Shogun

Shogun is a widely used open-source machine learning library written in C++, with interfaces for Python, R, Scala, C#, Ruby, and other languages. Shogun supports the following algorithms:

  • Support Vector Machines
  • Dimensionality Reduction
  • Clustering Algorithms
  • Hidden Markov Models
  • Linear Discriminant Analysis

Apache Mahout

Apache Mahout is a machine learning framework that focuses on collaborative filtering and classification. These implementations extend the Apache Hadoop platform. While it is still under development, the number of algorithms it supports has been steadily increasing. Because it is built on top of Hadoop, it uses the Map/Reduce paradigm. The following are some of Apache Mahout’s notable features:

  • It provides an expressive Scala DSL and a distributed linear algebra framework for deep learning computations.
  • It provides native solvers for CPUs and GPUs, including CUDA accelerators.

Apache Spark MLlib

Spark is a strong data streaming platform that also includes MLlib, which provides numerous advanced machine learning features. With its various APIs, it provides a scalable machine learning platform that allows customers to apply machine learning to real-time data.
With MLlib, you can connect to any Hadoop data source and apply machine learning techniques to it with little friction. You may also use Spark to perform iterative computation in order to improve the outcomes of your algorithms. MLlib’s supported algorithms include:

  • Classification (e.g. logistic regression, naive Bayes)
  • Regression (e.g. linear regression, decision trees)
  • Clustering (e.g. k-means)
  • Collaborative filtering (e.g. alternating least squares)

Model assessments:

Accuracy estimation approaches like the holdout method, which separates the data into a training and a test set and assesses the performance of the trained model on the test set, may be used to validate machine learning classification models. The K-fold cross-validation technique, by contrast, randomly splits the data into K subsets and then runs K experiments, each using 1 subset for assessment and the remaining K-1 subsets for training the model. The bootstrap, which samples n instances with replacement from the dataset, can also be used to estimate model accuracy.
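The holdout and K-fold splits described above can be sketched in a few lines of plain Python (index bookkeeping only; model training and scoring are omitted):

```python
import random

def holdout_split(data, test_fraction=0.25, seed=0):
    """Holdout method: shuffle, then carve off a disjoint test set."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def k_fold_splits(data, k):
    """K-fold cross-validation: each fold serves as the test set once."""
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        test_set = folds[i]
        train_set = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train_set, test_set

data = list(range(12))

train_set, test_set = holdout_split(data)
assert len(train_set) == 9 and len(test_set) == 3

for train_set, test_set in k_fold_splits(data, k=4):
    # Each experiment trains on K-1 subsets and evaluates on the held-out one.
    assert len(train_set) == 9 and len(test_set) == 3
```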

Investigators commonly report sensitivity and specificity, or True Positive Rate (TPR) and True Negative Rate (TNR), in addition to total accuracy. Investigators also occasionally report the false positive rate (FPR) and the false negative rate (FNR). These rates, however, are ratios whose numerators and denominators remain hidden. The total operating characteristic (TOC) is a useful way to represent the diagnostic capability of a model. The TOC shows the numerators and denominators of the previously stated rates, and so offers more information than the frequently used receiver operating characteristic (ROC) and its associated area under the curve (AUC).
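A minimal sketch of these rates, computed from explicit confusion-matrix counts so that the numerators and denominators stay visible (the counts here are hypothetical):

```python
def rates(tp, fn, fp, tn):
    """Confusion-matrix counts -> the four rates, denominators made explicit."""
    return {
        "TPR": tp / (tp + fn),  # sensitivity: true positives / all positives
        "TNR": tn / (tn + fp),  # specificity: true negatives / all negatives
        "FPR": fp / (fp + tn),  # false alarms / all negatives
        "FNR": fn / (fn + tp),  # misses / all positives
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
    }

r = rates(tp=40, fn=10, fp=5, tn=45)
print(r["TPR"])  # 0.8
print(r["TNR"])  # 0.9
```

Keeping the counts around, as the TOC does, tells you far more than the bare ratios: a 0.8 TPR means something different for 50 positives than for 5.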

Advantages of Machine Learning:

Machine Learning unquestionably aids people in becoming more creative and productive at work. Essentially, you may use Machine Learning to outsource fairly difficult or boring tasks to the computer, ranging from scanning, storing, and filing paper documents like bills to organizing and editing photos.

Self-learning systems can also handle complicated tasks beyond these relatively simple activities. Recognizing error patterns is one of them. This is a significant benefit, particularly in industries that rely on continuous, error-free output, such as manufacturing. While even specialists cannot always predict where and how a production fault in a plant fleet will occur, Machine Learning allows for early detection of the error, which saves time and money.

Damian Heimel, Deevio’s co-founder and COO, described in an interview how his company’s machine learning software is used in the foundry sector. Machine learning is a common approach for automating end-of-line quality control, since the components made there are frequently subject to stringent safety standards. Defects in cast components range from fractures to blowholes, and with thousands of pieces produced each day, the inspection procedure is prone to human error: a human inspector’s eyes grow weary over time, whereas machine learning software can establish a uniform quality inspection procedure.

Self-learning programs are now also used in the medical field. In the future, after “consuming” huge amounts of data (medical publications, studies, etc.), apps will be able to warn a patient if their doctor wants to prescribe a drug they cannot tolerate. This “knowledge” also means that the app can propose alternatives which, for example, take into account the genetic profile of the respective patient.

Types of Machine Learning:

In general, algorithms play a crucial part in Machine Learning: on the one hand, they recognize patterns, and on the other, they may produce solutions. Algorithms can be classified into several types:

Supervised learning: In supervised learning, example models are established ahead of time. These must then be described in order to guarantee a sensible allocation of the information to the various model groups of the algorithms. To put it another way, the system learns from the input and output pairs it is given. During supervised learning, a programmer, who functions as a kind of teacher, supplies the correct output values for specific inputs. The goal is to train the system through a series of computations with various inputs and outputs, and so to build up the connections between them.
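As a miniature example of supervised learning, the following sketch fits a line y = a·x + b to teacher-provided input/output pairs using the closed-form least-squares solution (the data is hypothetical):

```python
# Supervised learning in miniature: given (input, output) pairs supplied by
# a "teacher", fit y = a*x + b by ordinary least squares (closed form).

def fit_line(pairs):
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Teacher-provided input/output pairs following y = 2x + 1.
pairs = [(0, 1), (1, 3), (2, 5), (3, 7)]
a, b = fit_line(pairs)
print(round(a, 6), round(b, 6))  # 2.0 1.0
```

The "connections" the text mentions are here just the two parameters a and b; deep networks learn millions of such parameters, but from the same kind of input/output supervision.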

Unsupervised learning is when artificial intelligence learns without the use of predetermined goal values or rewards. It’s mostly used to learn segmentation (clustering). The computer tries to organize and categorize the data entered based on specific criteria. For example, a machine might (very easily) learn that coins of various colors can be sorted according to the characteristic “color” to organize them.
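The coin example can be sketched directly: with no labels provided, the data is simply partitioned by the shared characteristic “color” (a real unsupervised algorithm, such as k-means, would discover the grouping criterion itself):

```python
from collections import defaultdict

# Unsupervised grouping: no target labels are given; the coins are simply
# partitioned by a shared characteristic (here, "color").
coins = [
    {"color": "gold", "value": 50},
    {"color": "silver", "value": 20},
    {"color": "gold", "value": 50},
    {"color": "copper", "value": 5},
]

clusters = defaultdict(list)
for coin in coins:
    clusters[coin["color"]].append(coin)

print(sorted(clusters))  # ['copper', 'gold', 'silver']
```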

Semi-supervised learning is a hybrid of supervised and unsupervised learning.

Reinforcement learning is based on rewards and punishments, much like Skinner’s operant conditioning. A positive or negative interaction teaches the algorithm how to respond to a specific circumstance.
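The reward-and-punishment idea can be sketched as a running-average value update, a simplified form of the value updates used in reinforcement learning (the reward sequence is hypothetical):

```python
# Reinforcement in miniature: an action's estimated value is nudged up by
# rewards and down by punishments (an exponential running-average update).

def update(value, reward, learning_rate=0.5):
    return value + learning_rate * (reward - value)

value = 0.0
for reward in [1, 1, -1, 1]:  # +1 = reward, -1 = punishment
    value = update(value, reward)

print(value)  # 0.4375
```

After two rewards, one punishment, and another reward, the estimate ends up positive but well below 1, reflecting the mixed feedback. Full algorithms like Q-learning apply this same update per state-action pair.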

Active learning: In active learning, an algorithm is given the ability to query the correct outcomes for specific input data, based on pre-defined questions considered relevant. In most cases, the system itself selects the queries it considers most informative.

In general, depending on the system, the data foundation might be either offline or online. Furthermore, it may be made available only once or several times for Machine Learning. Another differentiating characteristic is the staggered development or simultaneous existence of the input and output pairs. A difference is established between so-called sequential learning and so-called batch learning based on this characteristic.

The Commercial Application of Machine Learning:

Economic data may be transformed into money with the aid of Machine Learning. Companies that use Machine Learning or Machine Learning methodologies are able to enhance customer satisfaction while also lowering costs. Customer desires and wants may be assessed using Machine Learning, and marketing strategies can be tailored accordingly. This improves the consumer experience while also increasing loyalty.

Furthermore, Machine Learning can assist businesses in determining whether or not there is a risk of client migration in the near future. This can be accomplished, for example, by evaluating support requests automatically. Another option is to look at the qualities that consumers who have already moved have in common. The firm obtains a list of the customer group at risk of migration if current customers with these qualities are filtered out based on the characteristics derived from the analysis. After that, appropriate steps may be made to keep these clients.

Furthermore, chatbots are increasingly being employed in telephone customer support. These are automated customer-communication programs. Through Machine Learning, chatbots can improve their ability to interpret tone in various scenarios. Chatbots can also route the call to a call-center employee if the request is more complicated.

Machine Learning is also an important technique in the development of autonomous systems: Machine Learning is also utilized in collaborative robots, in addition to autonomous automobiles. Machine Learning might also be used in the following areas:

  • Analysis of the stock market
  • Credit Card Fraud Detection
  • Automated diagnostic procedures
  • Detection of landmines in acoustic sensor and radar data

Training models:

In order to function successfully, machine learning models often require a large amount of data. When training a machine learning model, a large, representative sample of data from a training set is usually required. A corpus of text, a collection of pictures, and data gathered from individual customers of a service are all examples of data from the training set. When training a machine learning model, keep an eye out for overfitting.

Tenets of Artificial intelligence and Machine Learning in Robotics

AI and machine learning are influencing four areas of robotic operations to make existing applications more efficient and lucrative. The following are some of the applications of AI in robotics:

  1. Vision — AI is helping robots detect items they’ve never seen before and recognize objects with far greater detail.
  2. Grasping — robots are also grasping items they’ve never seen before with AI and machine learning helping them determine the best position and orientation to grasp an object.
  3. Motion Control — machine learning helps robots with dynamic interaction and obstacle avoidance to maintain productivity.
  4. Data — AI and machine learning both help robots understand physical and logistical data patterns to be proactive and act accordingly.

AI and machine learning are still in their infancy in regards to robotic applications, but they’re already having an important impact.

Two Types of Industrial Robot Applications Using Artificial Intelligence and Machine Learning

Supply chain and logistics applications are seeing some of the first implementations of AI and machine learning in robotics.

A robotic arm, for example, may be in charge of handling frozen cases of food that are covered with frost. The frost alters the items’ shapes, so the robot is not just occasionally presented with varying components but is constantly supplied with differently shaped pieces. Despite the differences in shape, AI helps the robot detect and grasp these items.
Picking and arranging over 90,000 distinct part types in a warehouse is another great example of machine learning. Without machine learning, automating such a large number of part types would be unprofitable, but today engineers can regularly feed robots photos of new parts, and the robots can reliably recognize the different part types.

Industrial robots will be transformed by artificial intelligence and machine learning. While these technologies are still in their infancy, they will continue to push the limits of industrial robotic automation in the next decades.

How AI and Machine Learning Can Improve Robotics:

AI-enabled industrial robots become more aware of people and their surroundings.
In the industrial sector, robots can help firms do more tasks with fewer errors. Of course, when introducing robots into the workplace, safety is paramount, which is why certain AI robotics firms are working on solutions that allow robots to comprehend their surroundings and react appropriately.
Veo Robotics has developed an industrial robotics system that incorporates computer vision, artificial intelligence, and sensors. The machines can work at full speed unless people come too close to them.

As a result, robots are no longer kept in cages, but human safety remains paramount. Veo Robotics’ technology allows a robot to dynamically estimate how far it has to stay away from a person in order to avoid colliding with them.
There are also AI-enabled autonomous mobile robots (AMRs) that can learn the layout of a warehouse and navigate safely past warehouse obstacles in real time. These vehicles deliver components and completed goods, freeing humans from a job that would ordinarily require thousands of steps each day.

Robots can learn from their mistakes and adapt thanks to machine learning.
People gain knowledge as a result of their experiences. Robotics applications may be able to do the same thing thanks to advances in technology such as machine learning. When that happens, they may not require ongoing, time-consuming human training. Instead, learning would occur as a result of continued use.

The Shadow Robot Company’s work with OpenAI, founded by Elon Musk and Sam Altman, provides an example of how a robot might be trained using machine learning. OpenAI researchers experimented with machine learning by developing DACTYL, a virtual robotic hand that learns via trial and error. These human-like techniques were then transferred to the physical Shadow Dexterous Hand, allowing it to grip and handle items efficiently. This demonstrates that agents can be trained successfully in simulation, without modeling every precise situation, so that the robot learns through reinforcement and intuitively makes better judgments.

The future of ML:

In the current age of Artificial Intelligence, machine learning is a hot topic. Computer vision and Natural Language Processing (NLP) are breaking new ground in ways that no one could have imagined. We increasingly see both in our daily lives, such as face recognition in smartphones, language translation software, and self-driving automobiles. What may appear to be science fiction is becoming reality, and achieving Artificial General Intelligence may only be a matter of time.

In this post, I’ll go through Jeff Dean’s keynote on computer vision and language models, as well as how machine learning will grow in the future from the standpoint of model construction.

Today, the field of machine learning is expanding rapidly, particularly in computer vision. In image recognition, the error rate of machines has fallen to only about 3%, below the human error rate of roughly 5%. This indicates that computers are already better than humans at identifying and interpreting pictures. What an incredible achievement! Computers used to be pieces of technology the size of a room; now, they can sense the world around us in ways we never imagined.

Application of computer vision

Jeff Dean describes diabetic retinopathy as a classic application of computer vision. Diabetic retinopathy is a diabetes complication that damages the eye. A thorough eye exam is currently necessary to diagnose it. A machine learning model that employs computer vision to make the diagnosis would be immensely useful in developing nations and remote communities where physicians are scarce. Computer vision, like all medical imaging applications, may also serve as a second opinion for domain specialists, confirming the accuracy of their diagnoses. In general, the goal of computer vision in medicine is to replicate the expertise of professionals and deploy it in the areas where it is most needed.

The problem with ML today:

The Google Senior Fellow discussed atomic models in his speech, which are now used by Machine Learning engineers to accomplish a variety of unit tasks. He argues that these models are inefficient and computationally costly, and that achieving good outcomes in such jobs requires more work.
To explain, in today’s ML world, professionals identify an issue they want to address and work on locating the appropriate dataset to train the model and complete the assignment. Dean claims that by doing so, they essentially start from scratch: they randomize the model’s parameter values and then try to learn about all of the jobs from the dataset.

To further explain this point, he uses the following analogy: “It’s like when you want to learn something new, you forget all you’ve learned and go back to being an infant, and now you want to learn everything about this task.”
He compares this approach to people becoming newborns whenever they wish to learn something new by removing one brain and replacing it with another. This approach is not only computationally costly, but it also requires more work to obtain good results in certain activities. And Jeff Dean has a solution to offer.

The Holy Grail of ML:

Jeff believes that the future of machine learning rests in a large, multi-functional model that can perform a lot of things. This super model will do away with the necessity to develop individual models for distinct jobs, instead training this single big model with many areas of competence. Consider a computer vision model that can detect diabetic retinopathy, identify various dog types, recognize your face, and be utilized in both self-driving vehicles and drones.
He also stated that the model works by sparsely activating certain parts of the model that are needed. Most of the time, the model will be 99 percent idle, and you’ll just need to call on the correct piece of knowledge when it’s needed.

Challenges

Dean thinks this super model is a promising approach for machine learning, and the technical problems are fascinating. Building a model like this would raise a slew of fascinating computer systems and machine learning issues, including scalability and model structure.
The fundamental question is: How will the model learn how to route the various parts of the model in the most efficient way?

More advances in machine learning research and mathematics will be required to make a breakthrough like this.

ML Project Planning:

It’s critical to sit down and thoroughly consider what you want your machine learning model to do before you start constructing it. Before you start writing code, make sure you understand the problem at hand, the dataset’s nature, the sort of model you’ll be building, and how the model will be trained, tested, and assessed.
This post will go through ten key questions to think about before creating a machine learning model.

What are the predictor variables?

We typically work with extremely basic datasets in academic training programs, and the problem to be solved is well stated. For example, a homework task may provide you with a clean dataset and ask you to use a specific set of features as predictor features and one feature as your target feature. The problem may even tell you what type of model to develop. In a real-world data science assignment, you don’t know in advance what the predictor variables are.

To find out what the target variable is, you may need to collaborate with a team of engineers (industrial dataset), physicians (healthcare dataset), or others, depending on the company you work for. For example, if an industrial system contains sensors that generate data in real time, you may not, as a data scientist, have technical understanding of the system in question. As a result, you’ll need to collaborate with engineers and technicians to determine which characteristics are of interest, as well as the predictor variables and the target variable. Depending on the kind and complexity of the challenge, your dataset may contain hundreds or thousands of features.
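Once the predictor and target features have been agreed with the domain experts, separating them is mechanical; here is a minimal sketch using a hypothetical sensor dataset (feature names invented for illustration):

```python
# Hypothetical sensor dataset: rows of readings plus the outcome of interest.
rows = [
    {"temp": 71.0, "pressure": 1.2, "vibration": 0.03, "failed": 0},
    {"temp": 95.5, "pressure": 2.9, "vibration": 0.40, "failed": 1},
    {"temp": 68.2, "pressure": 1.1, "vibration": 0.02, "failed": 0},
]

predictors = ["temp", "pressure", "vibration"]  # chosen with domain experts
target = "failed"

# X holds the predictor values, y the target values, row by row.
X = [[row[f] for f in predictors] for row in rows]
y = [row[target] for row in rows]

print(X[0])  # [71.0, 1.2, 0.03]
print(y)     # [0, 1, 0]
```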

This article was made by Salmen Jarraya for Holberton School.
