Improving Machine Learning Methods for Solving Non-Stationary Conditions Based on Data Availability, Time Urgency, and Types of Change

2020-01-22T19:35:07Z (GMT) by Chun Fan Goh
Supervised learning algorithms take a training dataset as input and produce a model to predict unseen data. The algorithms work well when the deployment condition is similar to the training condition under which the training data were generated. A non-stationary condition occurs when the deployment condition differs from the training condition, and the change often results in a drop in performance. Non-stationary conditions are frequently encountered in machine learning applications. For instance, the issue arises when a snore detector or a vocal emotion recognizer is trained on a fixed group of subjects but tested on a different group of subjects. Another example is using a learning-based controller to shoot water at a distant target under changing wind conditions. To compensate for a shift in condition, techniques such as importance weighted learning (IWL) and forgetting are used. However, these techniques are not adequate. IWL can handle covariate shifts but not concept shifts. It is also not designed for online prediction and thus fails to address frequent shifts in condition. While forgetting can address concept shifts, it is wasteful in discarding previously learned models.
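The importance-weighting idea mentioned above can be sketched as reweighting training examples by the density ratio between test and training inputs. The toy example below is illustrative only (the data, variable names, and cubic model are assumptions, not from the thesis); it uses known Gaussian densities for the ratio, whereas in practice the ratio must be estimated:

```python
import numpy as np

# Toy sketch of importance weighted learning (IWL) for covariate shift.
rng = np.random.default_rng(0)

# Training inputs drawn from one region, test inputs from a shifted region.
x_train = rng.normal(0.0, 1.0, 200)
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, 200)
x_test = rng.normal(1.5, 0.5, 100)

# Importance weights w(x) = p_test(x) / p_train(x); here both densities are
# known Gaussians, but in a real application the ratio must be estimated.
def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

w = gauss(x_train, 1.5, 0.5) / gauss(x_train, 0.0, 1.0)

# Weighted least-squares fit of a cubic: training points that resemble test
# points get more influence. polyfit squares its weights, so pass sqrt(w)
# to obtain importance weights on the squared loss.
coef = np.polyfit(x_train, y_train, deg=3, w=np.sqrt(w))
pred = np.polyval(coef, x_test)
```

Without the weights, the fit would be dominated by training points near 0 even though the test inputs concentrate near 1.5.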
To address these shortfalls, this thesis proposes looking at the three stages of supervised learning: before, during, and after the learning process. This new perspective broadens our choice of strategy, and we have devised pre-learning, in-learning, and post-learning shift compensation methods. These new methods not only improve performance under non-stationary conditions but also handle more difficult concept-shift problems and situations that require a timely response. Under the proposed unified view, IWL is grouped as an in-learning method, which modifies the learning process to adapt to a condition change. In-learning methods are applicable when a limited amount of test data is available. For example, IWL uses the test data to implicitly select training data that match the test condition during learning. We also developed an alternative to IWL based on transfer learning: it uses test data to further train a prediction model pre-trained on general training data, so that the model better adapts to the test condition. We showed the effectiveness of this method by applying it to a vocal emotion recognizer: using test data amounting to half of the training data, we boosted the accuracy by 10 percent. In applications that require a timely response, such as inverse kinematics modeling and vocal emotion recognition for human-robot interaction, post-learning methods that modify predictions dynamically are suitable. Based on this concept,
we have developed a local learning technique that handles multiple covariate shifts in inverse kinematics prediction. It also improves prediction accuracy in vocal emotion recognition: in one instance, the results improved from 88.8% to 93.2% when we switched from IWL to the local learning method. Local learning also allows the use of feature augmentation to convert a more difficult concept-shift problem into an easier covariate-shift problem for our application in water shooting control. When data are abundant, we can leverage pre-learning methods, such as
condition-specific learning, to avoid non-stationary conditions altogether. This technique helped us develop a semi-automatic snore labeling software tool that achieves good accuracy (0.93 F1-score) and cuts labeling time from hours to minutes. Besides looking at data, we can also use deep learning methods to learn features that are robust to change. In an ablation study, we showed that features extracted from very deep networks and recurrent networks result in more accurate and robust snore classification. Finally, with advances in computer simulation, unlimited artificial data can be generated to better approximate and cover possible test conditions. We tested this idea by teaching a double-hull welding
robot to climb down safely from a high wall through reinforcement learning and achieved a 90% success rate.
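The post-learning local-learning idea described earlier can be sketched as fitting a small model on the training points nearest each query at prediction time, so predictions adapt to the current condition without retraining globally. The following toy example is a minimal sketch (the data and function names are assumptions, not the thesis implementation):

```python
import numpy as np

def local_predict(x_train, y_train, x_query, k=10):
    """Post-learning local prediction: for each query point, fit a linear
    model on its k nearest training neighbours and evaluate it there."""
    preds = []
    for q in np.atleast_1d(x_query):
        idx = np.argsort(np.abs(x_train - q))[:k]    # k nearest neighbours
        A = np.vstack([x_train[idx], np.ones(k)]).T  # local linear design
        slope, intercept = np.linalg.lstsq(A, y_train[idx], rcond=None)[0]
        preds.append(slope * q + intercept)
    return np.array(preds)

# Toy data: a nonlinear function that a single global linear model would miss.
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(x)
pred = local_predict(x, y, np.array([1.0]))
```

Because the model is rebuilt from neighbours at each query, a shift in the input distribution simply selects a different neighbourhood, which is one way local methods can cope with multiple covariate shifts.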
In conclusion, by looking at the broader picture of supervised learning, we extend our tools for combating non-stationary conditions from in-learning methods to post-learning and pre-learning methods. We demonstrated the usefulness of this new perspective by applying the in-learning, post-learning, and pre-learning concepts to snore detection, vocal emotion recognition, water shooting control, and the control of a double-hull welding robot climbing down from a tall wall, all with promising results. From these applications, we also distilled a method selection guideline based on the three-stage taxonomy, where the selection depends on data availability, time urgency, and type of shift.
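The selection guideline can be encoded as a small decision rule over the three criteria. The function below is a hypothetical illustration (the argument names, category values, and rule ordering are assumptions), reflecting only the mapping stated in the abstract: abundant data favours pre-learning, time urgency favours post-learning, and limited test data favours in-learning:

```python
def select_stage(test_data: str, urgent: bool, shift: str) -> str:
    """Illustrative three-stage method selection.

    test_data: 'none' | 'limited' | 'abundant'  -- data availability
    urgent:    whether a timely response is required
    shift:     'covariate' | 'concept'          -- type of shift
    """
    if test_data == "abundant":
        # Abundant data: avoid non-stationarity with condition-specific learning.
        return "pre-learning"
    if urgent:
        # Timely response needed: modify predictions dynamically, e.g. local
        # learning (with feature augmentation if the shift is a concept shift).
        return "post-learning"
    if test_data == "limited":
        # Limited test data: adapt the learning process, e.g. IWL or transfer.
        return "in-learning"
    # No test data and no urgency: fall back to robust features / simulation.
    return "pre-learning"
```

For example, `select_stage("limited", True, "concept")` would point to a post-learning method such as local learning with feature augmentation.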