Hello everyone! Today we are going to take a look at one of the fastest-growing domains in IT: Machine Learning. Machine learning is a subfield of artificial intelligence (AI). It allows systems (computers) to apply statistical learning techniques to automatically identify patterns in data, and these techniques can be used to make highly accurate predictions. The goal of machine learning, generally, is to understand the structure of data and fit that data into models that can be understood and utilized by people.
Arthur Samuel (1959) defined Machine Learning as a field of study that gives computers the ability to learn without being explicitly programmed.
So, what is Machine Learning?
The above definition of Machine Learning is a little too technical to fully convey the concept, so in layman’s terms: Machine Learning is a process through which an algorithm or program goes through your existing data, structures it, and can make predictions with very high accuracy about the future data that will be added to your data set. This helps you proactively make decisions about your system, application, logistics, or even costs, all in advance. Machine Learning can predict a disaster in your environment or application well ahead of time, so that you can act to avoid the disaster altogether. This is a great way to make sure that your application has no downtime at all, and it can also save you a lot in terms of logistics.
Machine learning is closely related to computational statistics, which also focuses on prediction-making through the use of computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is sometimes conflated with data mining; the latter subfield focuses more on exploratory data analysis, much of which falls under unsupervised learning. Machine learning can also be unsupervised, and can be used to learn and establish baseline behavioral profiles for various entities, which are then used to find meaningful anomalies.
Evolution of Machine Learning
Machine Learning was born from pattern recognition and the theory that computers can learn without being programmed to perform specific tasks. The iterative aspect of machine learning is important because as models are exposed to new data, they are able to independently adapt. They learn from previous computations to produce reliable, repeatable decisions and results. While many machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data — over and over, faster and faster — is a recent development. Here are a few widely publicized examples of machine learning applications you may be familiar with:
- The heavily hyped, self-driving Google car? The essence of machine learning.
- Online recommendation offers such as those from Amazon and Netflix? Machine learning applications for everyday life.
- Knowing what customers are saying about you on Twitter? Machine learning combined with linguistic rule creation.
- Fraud detection? One of the more obvious, important uses in our world today.
So, how does it work?
Machine Learning Methods
1. Supervised Learning
In supervised learning, the computer is provided with example inputs that are labeled with their desired outputs. The purpose of this method is for the algorithm to be able to “learn” by comparing its actual output with the “taught” outputs to find errors, and modify the model accordingly. Supervised learning therefore uses patterns to predict label values on additional unlabeled data.
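To make this concrete, here is a minimal sketch of supervised learning: a tiny 1-nearest-neighbour classifier in plain Python. The study-hours data, labels, and function names are all invented for illustration; real projects would use a library such as scikit-learn.

```python
# Supervised learning sketch: learn from labelled examples, then
# predict labels for new, unlabelled inputs.

def train(examples):
    """'Training' for 1-nearest-neighbour is simply memorising the labelled examples."""
    return list(examples)

def predict(model, x):
    """Label a new point with the label of its closest training example."""
    nearest = min(model, key=lambda ex: abs(ex[0] - x))
    return nearest[1]

# Labelled inputs: (hours of study, pass/fail outcome) -- made-up data.
labelled = [(1, "fail"), (2, "fail"), (6, "pass"), (8, "pass")]
model = train(labelled)

print(predict(model, 7))    # close to the "pass" examples -> "pass"
print(predict(model, 1.5))  # close to the "fail" examples -> "fail"
```

The model compares its predictions against the taught labels exactly as described above: an error on a labelled example would push us to adjust the model (here, by adding more labelled data).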
2. Unsupervised Learning
In unsupervised learning, data is unlabeled, so the learning algorithm is left to find commonalities among its input data. As unlabeled data are more abundant than labeled data, machine learning methods that facilitate unsupervised learning are particularly valuable. The goal of unsupervised learning may be as straightforward as discovering hidden patterns within a data set, but it may also have a goal of feature learning, which allows the computational machine to automatically discover the representations that are needed to classify raw data. Unsupervised learning is commonly used for transactional data. You may have a large data set of customers and their purchases, but as a human you will likely not be able to make sense of what similar attributes can be drawn from customer profiles and their types of purchases.
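The customer-purchases scenario above can be sketched with a tiny 1-D k-means clustering loop in plain Python. The spend figures and starting centroids are invented; no labels are given, and the algorithm discovers the groups on its own.

```python
# Unsupervised learning sketch: 1-D k-means clustering of hypothetical
# customer spend figures. No labels are provided.

def kmeans_1d(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

spend = [12, 15, 14, 300, 310, 295]            # two obvious customer groups
centroids, clusters = kmeans_1d(spend, [0.0, 100.0])
print(centroids)                               # settles near the two group means
```

The two discovered clusters (low spenders vs. high spenders) are exactly the kind of "similar attributes" a human would struggle to pull out of a large transactional data set by hand.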
3. Semi-supervised Learning
Semi-supervised learning is used for the same applications as supervised learning. But it uses both labeled and unlabeled data for training — typically a small amount of labeled data with a large amount of unlabeled data (because unlabeled data is less expensive and takes less effort to acquire). This type of learning can be used with methods such as classification, regression and prediction. Semi-supervised learning is useful when the cost associated with labeling is too high to allow for a fully labeled training process. Early examples of this include identifying a person’s face on a webcam.
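A hedged sketch of the idea, using self-training on a made-up 1-D task: a handful of labeled points plus many cheap unlabeled ones. Each round, the model pseudo-labels the unlabeled point it is most confident about (here, simply the one closest to the labeled set) and adds it to the training data. All names and data are illustrative.

```python
# Semi-supervised self-training sketch: grow a small labelled set by
# pseudo-labelling the "easiest" unlabelled points first.

def nearest_label(labelled, x):
    """Return (distance, label) of the labelled point closest to x."""
    dist, label = min((abs(lx - x), ly) for lx, ly in labelled)
    return dist, label

def self_train(labelled, unlabelled):
    labelled, unlabelled = list(labelled), list(unlabelled)
    while unlabelled:
        # Most confident candidate = unlabelled point nearest the labelled set.
        best = min(unlabelled, key=lambda x: nearest_label(labelled, x)[0])
        _, label = nearest_label(labelled, best)
        labelled.append((best, label))        # pseudo-label it
        unlabelled.remove(best)
    return labelled

labelled = [(0.0, "A"), (10.0, "B")]          # small, expensive labelled set
unlabelled = [1.0, 2.0, 3.0, 9.0, 8.0]        # large, cheap unlabelled data
model = self_train(labelled, unlabelled)
print(sorted(model))                          # points near 0 become "A", near 10 become "B"
```

scikit-learn offers this pattern ready-made as `SelfTrainingClassifier` in its `sklearn.semi_supervised` module.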
4. Reinforcement learning
Reinforcement learning is often used for robotics, gaming and navigation. With reinforcement learning, the algorithm discovers through trial and error which actions yield the greatest rewards. This type of learning has three primary components: the agent (the learner or decision maker), the environment (everything the agent interacts with) and actions (what the agent can do). The objective is for the agent to choose actions that maximize the expected reward over a given amount of time. The agent will reach the goal much faster by following a good policy. So the goal in reinforcement learning is to learn the best policy.
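The three components above map directly onto a tiny Q-learning sketch: the agent picks actions, the environment (a made-up five-state corridor) returns the next state and reward, and trial and error fills in a table of action values. Everything here is illustrative.

```python
import random

# Reinforcement learning sketch: Q-learning on a 5-state corridor.
# The agent starts in state 0; reaching state 4 yields reward +1 and
# ends the episode.

random.seed(0)
ACTIONS = ["left", "right"]
GOAL, N_STATES = 4, 5
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    """The environment: everything the agent interacts with."""
    nxt = min(state + 1, GOAL) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for _ in range(200):                    # training episodes (trial and error)
    state, done = 0, False
    while not done:
        # Epsilon-greedy: explore sometimes, otherwise act greedily.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)   # with enough training, every state should prefer "right"
```

The learned policy is exactly the "best policy" the paragraph describes: the mapping from states to actions that maximizes expected reward.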
Tools/Programming Language for Machine Learning
The simplest yet most powerful tools for Machine Learning are as follows –
1. PredictionIO
PredictionIO is an open source machine learning server for software developers to create predictive features such as personalization, recommendation and content discovery. It supports event collection, deployment of algorithms, evaluation, and querying of predictive results via REST APIs. It is built on scalable open source services like Hadoop, HBase (and other DBs), Elasticsearch and Spark, and implements what is called a Lambda Architecture. The company was founded in 2012 and is based in Walnut, California. As of February 29, 2016, it operates as a subsidiary of salesforce.com, inc.
PredictionIO consists of the following components:
PredictionIO platform — Open source machine learning stack for building, evaluating and deploying engines with machine learning algorithms.
Event Server — Open source machine learning analytics layer for unifying events from multiple platforms.
Template Gallery — The place to download engine templates for different types of machine learning applications.
The instructions to download and install PredictionIO, along with its prerequisites, can be found on the PredictionIO website.
2. Torch
Torch is an open source machine learning library, a scientific computing framework, and a scripting language based on the Lua programming language. It provides a wide range of algorithms for deep learning and puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.
The goal of Torch is to have maximum flexibility and speed in building your scientific algorithms while making the process extremely simple. Torch comes with a large ecosystem of community-driven packages in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking among others, and builds on top of the Lua community.
3. DL-Learner
DL-Learner is a framework for supervised Machine Learning in OWL, RDF and Description Logics. The DL-Learner software learns concepts in Description Logics (DLs) from examples. Equivalently, it can be used to learn classes in OWL ontologies from selected objects. It extends Inductive Logic Programming to Description Logics and the Semantic Web. The goal of DL-Learner is to provide a DL/OWL-based machine learning tool to solve supervised learning tasks and support knowledge engineers in constructing knowledge and learning about the data they created.
DL-Learner consists of core functionality, which provides Machine Learning algorithms for solving the learning problem, support for different knowledge base formats, an OWL library, and reasoner interfaces. There are several interfaces for accessing this functionality, a couple of tools which use the DL-Learner algorithms, and a set of convenience scripts. The general structure is illustrated in the following figure:
4. Nervana Neon
Intel® Nervana™ neon™ is a reference deep learning framework targeting ease of use, extensibility, and optimal performance on all hardware. neon supports many commonly used layers and offers a Model Zoo to help you accelerate development of your own models, covering network design, model training, and inference within your data science workflows.
Click here for downloading, installing and configuring Neon.
These are some of the open source tools used for Machine Learning. IT giants have also pitched into the Machine Learning domain: many of the major players in IT have developed or acquired tools to bring Machine Learning to almost any system or application. Here are a few of those –
IBM Watson Machine Learning is an IBM Cloud service that’s available in the Watson Data Platform integrated environment. Use IBM Watson™ Knowledge Studio to create a machine-learning model that understands the linguistic nuances, meaning, and relationships specific to your industry, or to create a rule-based model that finds entities in documents based on rules that you define. Watson Knowledge Studio provides easy-to-use tools for annotating unstructured domain literature, and uses those annotations to create a custom machine-learning model that understands the language of the domain. The accuracy of the model improves through iterative testing, ultimately resulting in an algorithm that can learn from the patterns that it sees and recognize those patterns in large collections of new documents. You can deploy the finished machine-learning model to other Watson cloud-based offerings and cognitive solutions to find and extract mentions of relations and entities, including entity coreferences. The following diagram illustrates how it works.
You can get started with IBM Watson Machine Learning by clicking here, but to do so you need an IBMid; click here to create a new account on IBM Cloud and get your IBMid. Check out the pricing model for the IBM Watson Machine Learning platform.
Azure Machine Learning is an integrated, end-to-end data science and advanced analytics solution. It enables data scientists to prepare data, develop experiments, and deploy models at cloud scale.
The main components of Azure Machine Learning are:
Azure Machine Learning Workbench
Azure Machine Learning Experimentation Service
Azure Machine Learning Model Management Service
Microsoft Machine Learning Libraries for Apache Spark (MMLSpark Library)
Visual Studio Code Tools for AI
Check out the pricing model for Azure Machine Learning. You need an account on Azure; if you don’t have one, click here to create your account free of cost.
Machine Learning @ Amazon Web Services
Amazon has been investing deeply in artificial intelligence (AI) for over 20 years, and machine learning algorithms drive many of its internal systems.
Why machine learning on AWS?
— Machine Learning for everyone
Whether you are a data scientist, ML researcher, or developer, AWS offers machine learning services and tools tailored to meet your needs and level of expertise.
— API-driven ML services
Developers can easily add intelligence to any application with a diverse selection of pre-trained services that provide computer vision, speech, language analysis, and chatbot functionality.
— Deep platform integration
ML services are deeply integrated with the rest of the platform including the data lake and database tools you need to run ML workloads. A data lake on AWS gives you access to the most complete platform for big data.
Control access to resources with granular permission policies. Storage and database services offer strong encryption to keep your data secure. Flexible key management options allow you to choose whether you or AWS will manage the encryption keys.
Consume services as you need them and only for the period you use them. AWS pricing has no upfront fees, termination penalties, or long term contracts. The AWS Free Tier helps you get started with AWS.
Services offered by Amazon Web Service in regard to Machine Learning —
- Amazon SageMaker
- Amazon Comprehend
- AWS Deep Learning AMIs
- Amazon Lex
- Amazon Polly
- Amazon Rekognition
- Amazon Machine Learning
- Apache MXNet on AWS
- Amazon Translate
- Amazon Transcribe
- AWS DeepLens
Future of Machine Learning
Machine learning is currently one of the hottest topics in IT. The reason stems from the seemingly unlimited use cases in which machine learning can play a role, from fraud detection to self-driving cars to identifying your “gold card” customers to price prediction. But where are we going with Machine Learning? Where will we be in the next decade? To be honest, I don’t know; nobody knows exactly. But there are some predictions going around the internet, and I’ve put together the ones I personally think will come true very soon.
Better Unsupervised Algorithms
Unsupervised learning occurs when no labels are given to the learning algorithm. It is left on its own to find structure in the input data. Unsupervised learning can be a goal in itself, such as discovering hidden patterns in data, or a means towards an end, often called feature learning. It is likely that advances in building smarter unsupervised learning algorithms will lead to faster and more accurate outcomes.
Collaborative Learning
Collaborative learning is about utilizing different computational entities so that they collaborate in order to produce better learning results than they would have achieved on their own. An example of this would be utilizing the nodes of an IoT sensor network, or what is called edge analytics. With the growth of the IoT, it is likely that large numbers of separate entities will be utilized to learn collaboratively in many ways.
Machine Learning APIs
This technology includes tools like APIs and services through which developers can create more discoverable and intelligent applications. Machine learning APIs will allow developers to introduce intelligent features such as emotion detection; speech, facial, and vision recognition; and language and speech understanding into their applications. The future of this field will be the introduction of deeply personalized computing experiences for all.
Well, that’s it for today, guys. I hope you enjoyed it or, more importantly, learned something new; I know I did. I will be back soon with something new and exciting that is changing the IT world. Take care and see you soon :)