Definitions of machine learning are bound to be controversial. From a scientific perspective, machine learning is the study of learning mechanisms — mechanisms for using past experience to make future decisions. From an engineering perspective, machine learning is the study of algorithms for automatically constructing computer software from training data.
Machine learning has become the dominant approach to most of the classical problems of artificial intelligence (AI). It now dominates the fields of computer vision, speech recognition, natural language question answering, computer dialogue systems, and robotic control. It has also achieved a prominent role in other areas of computer science such as information retrieval, database consistency, and spam detection.
One can argue that machine learning is revolutionizing our understanding of how computer software is constructed. It has become a new foundation for much of the practice of computer science, and researchers are now tackling a broad range of open problems in the field.
Learning to Debug Programs
Machine learning is making inroads into other areas of computer science: systems, networking, software engineering, databases, architecture, graphics, and more. One area that seems ripe for progress is automated debugging. Debugging is extremely time-consuming, but today we have the Internet, huge repositories of open-source code, and the ability to leverage mass collaboration.
Every time a programmer fixes a bug, there is potentially a piece of training data. If programmers allow their edits, debugging traces, and compiler messages to be recorded automatically, a large corpus of bugs and bug fixes will soon accumulate. Learning to transform buggy programs into bug-free ones is a very difficult problem, but it is also highly structured and relatively noise-free, so a system of this kind could become a "killer app" for machine learning. The sketch below shows one way such a training example might be recorded.
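As a rough illustration only, here is what a single recorded bug-fix example might look like. The `BugFixExample` class and its field names are assumptions made for this sketch, not the schema of any real tool.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch: one training example captured when a programmer fixes a
# bug, assuming the editor logs edits, traces, and compiler messages automatically.

@dataclass
class BugFixExample:
    buggy_source: str                 # file contents before the fix
    fixed_source: str                 # file contents after the fix
    compiler_messages: list[str]      # errors/warnings seen on the buggy version
    debug_trace: list[str]            # e.g. failing test output or a stack trace
    timestamp: datetime = field(default_factory=datetime.now)

example = BugFixExample(
    buggy_source="def area(r): return 3.14 * r",
    fixed_source="def area(r): return 3.14 * r * r",
    compiler_messages=[],
    debug_trace=["test_area failed: expected 12.56, got 6.28"],
)
```

A large collection of such records, paired across the "before" and "after" versions, is exactly the kind of supervised corpus a learned debugger would need.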
Deep Combination of Learning and Inference
Inference is crucial in structured learning, yet research on the two has been largely separate to date. A great deal of computing power is wasted because, over data at this scale, we can usually perform only approximate inference, and researchers are actively working on this problem.
The goal is to learn the most powerful models possible while still supporting efficient inference, ideally in real time.
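A minimal sketch of what "inference inside the learning loop" can look like: a structured-perceptron-style learner for sequence labeling, where Viterbi decoding (the inference step) runs on every training update. The toy data, score matrices, and learning rate below are all illustrative assumptions.

```python
import numpy as np

def viterbi(emission, transition):
    """Return the highest-scoring label sequence.
    emission: (T, K) per-position label scores; transition: (K, K) scores."""
    T, K = emission.shape
    score = emission[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transition + emission[t]   # (K, K) candidate scores
        back[t] = cand.argmax(axis=0)                      # best previous label
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def train_step(emission, transition, gold, lr=0.5):
    """One perceptron-style update: decode, then nudge scores toward the gold labels."""
    pred = viterbi(emission, transition)
    for t, (g, p) in enumerate(zip(gold, pred)):
        if g != p:
            emission[t, g] += lr
            emission[t, p] -= lr
    return pred

# Toy usage: 4 positions, 3 labels, random initial scores.
rng = np.random.default_rng(0)
emission = rng.normal(size=(4, 3))
transition = rng.normal(size=(3, 3))
gold = [0, 1, 1, 2]
for _ in range(50):
    train_step(emission, transition, gold)
print(viterbi(emission, transition))  # matches gold on this toy data after enough updates
```

The point of the sketch is that learning quality depends directly on how good and how fast the inner inference step is, which is why the two problems are increasingly studied together.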
Learning “in the large”
Machine learning is most likely to pay off once a "good enough" set of features is already in place. So far, researchers have worked mostly on micro-problems, and they are likely to shift increasingly toward macro-problems.
Learning “in the large” may include:
- learning in rich domains with many interrelated concepts;
- learning with a lot of knowledge, a lot of data, or both;
- taking large systems and replacing the traditional pipeline architecture with joint inference and learning (see the sketch after this list);
- learning models with trillions of parameters instead of millions;
- continuous, open-ended learning;
- and others.
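To make the pipeline-versus-joint-inference point concrete, here is a deliberately tiny sketch. The two stages, their labels, and the score tables are invented for illustration; the contrast is that a pipeline lets stage A commit first, while joint inference scores both decisions together.

```python
# Toy scores only: stage A alone prefers "a0", but that choice makes stage B's
# job much harder. Joint inference can trade a slightly worse A for a better B.
A_SCORE = {"a0": 0.6, "a1": 0.5}
B_SCORE = {("a0", "b0"): 0.1, ("a0", "b1"): 0.05,
           ("a1", "b0"): 0.4, ("a1", "b1"): 0.2}

def pipeline():
    a = max(A_SCORE, key=A_SCORE.get)                      # A decides alone
    b = max((bb for aa, bb in B_SCORE if aa == a),
            key=lambda bb: B_SCORE[(a, bb)])               # B inherits A's choice
    return (a, b), A_SCORE[a] + B_SCORE[(a, b)]

def joint():
    best = max(B_SCORE, key=lambda ab: A_SCORE[ab[0]] + B_SCORE[ab])
    return best, A_SCORE[best[0]] + B_SCORE[best]

print(pipeline())  # (('a0', 'b0'), 0.7) -- greedy pipeline
print(joint())     # (('a1', 'b0'), 0.9) -- joint inference finds a better overall pair
```

The pipeline commits to stage A's locally best decision, while joint inference accepts a slightly worse A to reach a better combined solution; that is the essence of replacing pipelines with joint learning and inference.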
The theory and practice of machine learning are still undergoing rapid evolution, and progress on these and other open problems is likely to continue over time.