Azure Machine Learning Basics
June 12, 2017
Among the sessions and announcements at the Microsoft BUILD 2017 conference were some great things about Azure Machine Learning. You may or may not have heard of these services; either way, let's look at what they are and how we can use them.
The machine learning components within Azure are all wrapped into an offering referred to as Microsoft Cognitive Services.
What are the Microsoft Cognitive Services?
Microsoft Cognitive Services are a set of APIs, SDKs and services available to developers to make their applications more intelligent, engaging and discoverable. Microsoft Cognitive Services expands on Microsoft’s evolving portfolio of machine learning APIs and enables developers to easily add intelligent features – such as emotion and video detection; facial, speech and vision recognition; and speech and language understanding – into their applications. Our vision is for more personal computing experiences and enhanced productivity aided by systems that increasingly can see, hear, speak, understand and even begin to reason.
What services are available today as part of Microsoft Cognitive Services?
The current offering has more components than I, and probably you, would expect. Right now, the services are broken down into Vision, Speech, Language, Knowledge and Search, and each of these is further broken down into one or more APIs.
Each currently available API can be tried out easily on the web and called from your code.
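Every one of them follows the same calling convention, so it is worth seeing the shape once up front. Here is a minimal sketch in Python, assuming the requests library and placeholder values for the key and region (your endpoint may differ depending on where you provision the service):

```python
import requests

# Placeholder values - substitute your own key and your resource's region.
SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"
ENDPOINT = "https://westus.api.cognitive.microsoft.com"

def call_cognitive_service(path, payload):
    """Generic POST against a Cognitive Services API; all of the services
    authenticate with the Ocp-Apim-Subscription-Key header and return JSON."""
    response = requests.post(
        ENDPOINT + path,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json=payload,
    )
    response.raise_for_status()
    return response.json()
```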
Vision
State-of-the-art image-processing algorithms help you moderate content automatically and build more personalized apps by returning smart insights about faces, images, and emotions. This group contains the following endpoints (a minimal call sketch follows the list):
Computer Vision
Face
Content Moderator
Emotion
Video
Custom Vision
Video Indexer
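As a quick illustration of the Vision family, here is a sketch against the Computer Vision analyze operation (v1.0 at the time of writing; the region, key and image URL are placeholders):

```python
import requests

# Assumed region and API version - check the Azure portal for your endpoint.
url = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": "YOUR_VISION_KEY",
             "Content-Type": "application/json"},
    params={"visualFeatures": "Description,Faces"},
    json={"url": "https://example.com/some-photo.jpg"},
)
analysis = response.json()

# The Description feature includes an auto-generated caption for the image.
print(analysis["description"]["captions"][0]["text"])
```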
Speech
Process spoken language in your applications. This group contains the following endpoints (a token-handshake sketch follows the list):
Custom Speech
Speaker Recognition
Translator
Bing Speech
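The speech services differ slightly from the others in that recognition requests are authenticated with a short-lived access token rather than the raw key. A minimal sketch of that handshake, assuming the issueToken endpoint documented for Bing Speech at the time of writing:

```python
import requests

# Exchange the subscription key for a short-lived (roughly 10 minute) token.
token = requests.post(
    "https://api.cognitive.microsoft.com/sts/v1.0/issueToken",
    headers={"Ocp-Apim-Subscription-Key": "YOUR_SPEECH_KEY"},
).text

# The token is then passed as a Bearer credential on recognition requests.
auth_header = {"Authorization": "Bearer " + token}
```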
Language
Allow your apps to process natural language, evaluate sentiment and topics, and learn how to recognize what users want. This group contains the following endpoints (a LUIS query sketch follows the list):
Language Understanding
Web Language
Translator
Bing Spell
Text Analytics
Linguistic Analysis
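For example, a published Language Understanding (LUIS) app is queried with a simple GET; this sketch assumes a hypothetical app ID and the v2.0 endpoint shape:

```python
import requests

# Hypothetical app ID; the region prefix depends on where the app is published.
luis_url = ("https://westus.api.cognitive.microsoft.com/"
            "luis/v2.0/apps/YOUR_APP_ID")

result = requests.get(
    luis_url,
    params={"subscription-key": "YOUR_LUIS_KEY",
            "q": "book me a flight to Seattle tomorrow"},
).json()

# The response names the best-matching intent and any recognized entities.
print(result["topScoringIntent"]["intent"])
for entity in result.get("entities", []):
    print(entity["type"], entity["entity"])
```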
Knowledge
Map complex information and data to solve tasks such as intelligent recommendations and semantic search. This group contains the following endpoints (a QnA Maker sketch follows the list):
Recommendations
Knowledge Exploration
Entity Linking
Academic
QnA Maker
Custom Decision
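As one example from this group, QnA Maker lets you ask a question of a knowledge base you have built. The sketch below assumes a hypothetical knowledge-base ID and the v2.0 generateAnswer shape that was current at the time of writing:

```python
import requests

# Hypothetical knowledge-base ID created through the QnA Maker portal.
kb_id = "YOUR_KNOWLEDGE_BASE_ID"
url = ("https://westus.api.cognitive.microsoft.com/"
       "qnamaker/v2.0/knowledgebases/" + kb_id + "/generateAnswer")

answer = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": "YOUR_QNAMAKER_KEY"},
    json={"question": "What are your opening hours?"},
).json()

# The service returns its best-matching answers with confidence scores.
print(answer["answers"][0]["answer"])
```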
Search
Make your apps, webpages, and other experiences smarter and more engaging with the Bing Search APIs. This group contains the following endpoints (a web-search sketch follows the list):
Bing Autosuggest
Bing Image
Bing News
Bing Video
Bing Web
Bing Custom
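A Bing Web Search call is a plain GET with the query string and your key; this sketch assumes the v5.0 endpoint that was current at the time of writing:

```python
import requests

results = requests.get(
    "https://api.cognitive.microsoft.com/bing/v5.0/search",
    headers={"Ocp-Apim-Subscription-Key": "YOUR_BING_KEY"},
    params={"q": "Microsoft Cognitive Services", "count": 5},
).json()

# Web results arrive under webPages.value, each with a name and URL.
for page in results["webPages"]["value"]:
    print(page["name"], "-", page["url"])
```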
There are a few new ones currently in preview that give you an idea of what to expect in the future.
Project Prague
Project Prague is a cutting-edge, easy-to-use SDK that creates more intuitive and natural experiences by allowing users to control and interact with technologies through hand gestures. Based on extensive research, it equips developers and UX designers with the ability to quickly design and implement customized hand gestures into their apps.
Project Nanjing
Isochrones describe the area that can be reached given a mode of transportation and optionally predicted traffic at a given time of day. Using isochrones, one can answer questions such as “where should I live if I don’t want to commute more than 45 minutes”. Project Nanjing provides time-specific isochrones for the most accurate search across time and space.
Project Johannesburg
Project Johannesburg previews a new truck-routing service for professional transportation. Long, wide or heavy vehicles or those transporting potentially dangerous goods may not be able to leverage the same routing service as standard vehicles. A bridge to pass under might be too low, one to cross might not support the weight, the turning radius might be too small or the incline too high. Project Johannesburg takes into consideration truck attributes such as specific speed limits, weight, length and height restrictions as well as rules for the transportation of hazardous materials.
Project Cuzco
This API sits on top of an engine that listens to the world and collects signals reporting events happening around the globe. The engine provides evidence with structured information from different sources, both positive and negative, biased and impartial. Piecing the related signals together helps characterize events that will happen in the future.
Project Abu Dhabi
Calculating travel time and distances in many-to-many scenarios is often addressed through distance matrix APIs. Project Abu Dhabi previews an advanced Distance Matrix algorithm that can optionally include a histogram of travel times over a time window, taking into account the predicted traffic at those times.
Project Wollongong
Does it matter to you how many cinemas, bars, theaters, parks or other kinds of places are nearby? If so, Project Wollongong is right for you. It allows you to provide a score for the attractiveness of a location based on the number of amenities, proximity to public transit stops or other criteria you set. You can search by time or distance and also take into consideration the predicted traffic at the time that matters most to you.
As you can see, Microsoft has invested, and continues to invest, in these services. To test any of them, you can simply visit the Microsoft Cognitive Services site and click through to see them in action before you sign up in Azure to use them.
For example, clicking on the Text Analytics API takes you to a screen where you can test it by typing in text, which is then checked for Sentiment, Key Phrases and Language.
Using the bio I use for conferences, I can simply paste it in and click Analyze.
Once it completes, it returns the JSON values in a structure covering Language, Sentiment and Key Phrases.
When coding against them, the principle is the same: call the endpoint, then consume the JSON response as needed. In a previous post, I walked through the setup and the code options for this.
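As a rough sketch of that pattern, here is the same Text Analytics check done from code, assuming the v2.0 endpoints and a placeholder key and region (the /sentiment, /keyPhrases and /languages operations all accept the same document body):

```python
import requests

base = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0"
headers = {"Ocp-Apim-Subscription-Key": "YOUR_TEXT_ANALYTICS_KEY"}

documents = {"documents": [
    {"id": "1", "language": "en",
     "text": "I speak at conferences about SQL Server and Azure."},
]}

sentiment = requests.post(base + "/sentiment", headers=headers,
                          json=documents).json()
phrases = requests.post(base + "/keyPhrases", headers=headers,
                        json=documents).json()

print(sentiment["documents"][0]["score"])     # 0.0 (negative) to 1.0 (positive)
print(phrases["documents"][0]["keyPhrases"])  # list of extracted key phrases
```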
This is one of the most important technologies available today, so I encourage you to learn more here.