FEATURED ARTICLE

Machine Learning

Machine Learning seems to be the latest ‘craze’, following the same path as ‘fads’ such as TQM, Kaizen, JIT, Six Sigma, Analytics and, more recently, Artificial Intelligence and Augmented Predictive Analytics. All have had a positive influence, but none has fully delivered the hyped-up benefits. Many applications have failed, and many have succeeded.

Machine Learning is a step in the right direction. It is a natural evolutionary progression in the use of computing power. Unfortunately, it has been overhyped, as all fads are, to gain acceptance by the market.

So, what is Machine Learning (ML)?

There are many definitions on the internet. Some say that ML is an application of Artificial Intelligence that provides the ability to learn automatically without the need for programming. Some say that ML can learn by itself.

There is an implication that ML is of the same, or even higher, calibre than human learning, especially when Machine Learning is associated with Neural Networks.

Irrespective of what the hype says, the essence is that machine learning is the use of programmed algorithms to learn from data and then make decisions or predictions about the future. ML is only as good as those algorithms.
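As a minimal sketch of what ‘learning from data’ means in practice (the toy numbers and the use of the scikit-learn library are illustrative assumptions, not something this article prescribes), a programmed algorithm estimates its parameters from historical data and then produces a prediction:

# A programmed algorithm "learns" parameters from data, then predicts.
# Assumes Python with scikit-learn installed; all numbers are invented.
from sklearn.linear_model import LinearRegression

X = [[1.0], [2.0], [3.0], [4.0], [5.0]]   # e.g. advertising spend (hypothetical)
y = [12, 19, 31, 42, 48]                  # e.g. observed sales (hypothetical)

model = LinearRegression()                # the programmed algorithm
model.fit(X, y)                           # "learning" = fitting coefficients to the data
print(model.predict([[6.0]]))             # a prediction, only as good as data + algorithm

Nothing in that sketch is conscious learning; the ‘knowledge’ is simply the fitted coefficients the algorithm was programmed to compute.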

The reality is that no computer or machine will ever be able to truly learn like a human being. A computer is not self-aware, nor is it aware of its changing environment. Self-awareness affects what we are willing to retain for the learning process. Our ‘gut feel’ is used to decide what we wish to incorporate. If something does not feel right, we are more likely to discard the information rather than learn from it. We were given instincts to cope in an uncertain, fuzzy environment. Although in an academic setting this may be frowned upon, successful businessmen and military commanders almost unanimously believe in this instinctual learning and decision-making process.

A human being, through awareness, will consider the ever-changing environment and modify the learning experience accordingly. A computer is not alive and can only ‘learn’ according to its programmed instruction set and feedback.

All a computer can do is mimic intelligent human learning through code. Artificial intelligence is not real intelligence; it is mimicked intelligence. It mimics the intelligence and decision making of experts. This mimicry of intelligence (which learning requires) is becoming more and more sophisticated, but what is missing is awareness and the ability to draw on all the other learning experiences. If anything differs from the assumptions used to develop the code that mimics learning, the machine learning process fails. A computer cannot adapt outside its pre-programmed instructions, no matter how real the mimicking appears. The learning is only as good as the programmers and scientists who use their knowledge to mimic the learning process.

In summary, human learning comes through awareness of experiences and feelings. Machine learning comes through programmed instructions that mimic decision processes supplied by the programmer.

Technology limitations

Machine learning methods combine three elements: representation, evaluation and optimization. Representation algorithms include regression analysis, logistic regression, decision theory and neural networks. Evaluation includes techniques such as least squares, likelihood and maximum likelihood. Optimization includes search algorithms, linear programming, dynamic programming and quadratic programming, amongst many others.
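To illustrate how these three elements fit together, the following minimal sketch (in Python with numpy; the data and settings are invented for illustration) uses logistic regression as the representation, the likelihood of the data as the evaluation measure, and plain gradient descent as the optimization step:

# Representation: p(y=1|x) = 1 / (1 + exp(-(w*x + b)))  (logistic regression)
# Evaluation: the (negative log-)likelihood of the data under the model
# Optimization: gradient descent on that likelihood
import numpy as np

X = np.array([0.5, 1.0, 1.5, 3.0, 3.5, 4.0])   # invented feature values
y = np.array([0, 0, 0, 1, 1, 1])               # invented binary outcomes

w, b, step = 0.0, 0.0, 0.1
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))     # model's predicted probabilities
    grad_w = np.mean((p - y) * X)              # gradient of the negative log-likelihood
    grad_b = np.mean(p - y)
    w -= step * grad_w                         # move the parameters downhill
    b -= step * grad_b
print("fitted parameters:", w, b)

Every choice in that sketch, from the representation to the evaluation measure, the step size and the number of iterations, is human judgement written into code.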

Most of these technologies have been around for over one hundred years. Even Neural Networks, which mimic animal nervous systems, were first proposed in the 1940s. The logistic function was invented in the 19th century, i.e. over one hundred years ago.

There are many issues with most of these technologies, and countless papers have been written about them. It is therefore foolish to read more into the power of machine learning than there is. Machine learning is only as good as the underlying algorithms used, many of which have flaws. Furthermore, the technologies and algorithms available are not all applicable to every problem. Machine learning depends on matching the best technologies to the application, and this depends on human judgement, which is programmed into the system.

Machine Learning Applications

Some machine learning applications are in pattern recognition, image recognition, speech recognition, statistical arbitrage, prediction and medical diagnosis. All of these work on the basis that there is some predictability and historical stability. Recognizing that an image is a house only works because there is similarity across all houses. Prediction only works if patterns are stable over time; it is not possible to predict the future if past patterns do not persist. Sometimes instability is not intuitively evident. An example is household water consumption, which may intuitively seem highly repetitive but in practice is not: seasonal effects vary, occupancy can vary for the same household, illness or stress affecting toilet use can change consumption, and changes in garden layout can change consumption.
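A minimal sketch of that point, using invented water-consumption figures and a deliberately naive forecasting rule (both are purely illustrative), shows how a forecast built on a stable past pattern goes noticeably wrong once occupancy, and therefore the pattern, changes:

# Why prediction needs stable patterns: a forecast built on past data
# fails when the underlying pattern shifts. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# "Past" daily water use: roughly stable around 300 litres/day
past = 300 + 20 * np.sin(np.arange(60) / 9.5) + rng.normal(0, 5, 60)
forecast = past.mean()            # naive prediction: the future looks like the past

# "Future" use after the household's occupancy drops: the old pattern no longer holds
future = 220 + 20 * np.sin(np.arange(30) / 9.5) + rng.normal(0, 5, 30)

print("forecast (litres/day):", round(forecast, 1))
print("actual after the change:", round(future.mean(), 1))
print("average error (litres/day):", round(abs(forecast - future.mean()), 1))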

Arguably one of the most prolific applications is in predictive analytics for business. The word ‘predictive’ must be properly understood: no prediction is completely reliable. Predictions depend on the data, on the appropriateness of the algorithms and on external factors that are not part of the data. The hype also overlooks the fact that once the business reacts to the predictions, the system is no longer the same system on which the predictions were based.

Conclusion

Machine Learning is sometimes misunderstood and overhyped. The computer does not learn consciously and never will. Programmers have simply written a series of steps, in the form of algorithms, that the processor must follow to mimic human learning processes. The effectiveness is dependent on the skills of the programmers and experts. The algorithms cannot cope with situations other than those the programmers and experts considered. ML also makes heavy use of old technologies that are known to have flaws.

Irrespective, ML is here to stay, and should be. It may not be perfect, and it may not have the learning abilities of a human being, but it can process data far faster than most humans can. It will provide better insights, albeit not perfect ones. What is important is to see through the hype and instead evaluate each application on its merits. A feasibility study should be performed before spending large amounts of money and resources on an application, to ensure that the expected results will be achieved.