SignalAI: Augmenting Signal Processing with Machine Learning

Shifting gears from statistics and the use of data for design to using sensor data for testing and operations, powered by SignalAI.

It would not be much of an exaggeration to call the 21st century the century of data. We now produce more data in a year than we generated from the beginning of human civilization up to 2014, and on top of this data a whole host of applications have been built that can predict, with remarkable accuracy, everything from what we are going to buy on e-commerce platforms to which shows we might want to watch next on our streaming platforms.

Along similar lines, a parallel revolution has been underway in sensors and sensor data analytics. Not only is the number of sensors growing exponentially across multiple verticals, but sensors are also becoming cheaper, more versatile, and more accurate at the same time. As a result, sensor data, or more precisely the analytics built on it, is entering new industries. Among these domains, predictive maintenance is one of the most promising, delivering tremendous value in terms of both cost and time. To get a sense of the scale, predictive maintenance alone can save approximately 18% to 25% of maintenance expenditures, with additional savings and benefits through increased uptime [1]. Within predictive maintenance, anomaly detection and fault type prediction have recently emerged as the two major areas where the joint application of signal processing and machine learning is producing revolutionary new solutions. Building these types of applications typically involves three steps: data gathering/generation, model building, and last but not least, model deployment and visualization. This is where SignalAI comes into the picture: it can help with all three pieces.

Starting with the data gathering/generation piece, there are two possibilities. In the first, we are lucky enough to already have sufficient sensor data, either as historical data stored on a local system or as access to a real-time sensor stream. SignalAI handles both situations: it can read files from a local directory or ingest real-time data through an MQTT listener. If, on the contrary, we do not have enough historical or real-time data to train our predictive models, SignalAI can be hooked up to different Altair simulation software to generate synthetic data and address the lack-of-data problem.

Once we have gathered or generated enough data, SignalAI tackles the second piece, model building, by combining signal processing with machine learning. For data preprocessing, SignalAI automatically extracts useful time-domain or frequency-domain features from the raw data, depending on the user's choice, and then builds state-of-the-art anomaly detection models on these extracted features. SignalAI currently offers three types of anomaly detection models to choose from, and we are adding a fourth option in which SignalAI picks the best model automatically with a single click from the user.

For the last piece, model deployment and visualization, there are typically three types of target platforms: desktop, cloud, and edge. SignalAI's models can be deployed on all three, depending on the user's choice and preferences.
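To make the model-building step a little more concrete, here is a minimal sketch of the kind of pipeline SignalAI automates, written with generic open-source Python libraries purely for illustration: a handful of time- and frequency-domain features are extracted from windowed raw signals and fed to an off-the-shelf anomaly detector trained on known-healthy data. The window size, feature choices, and IsolationForest model are assumptions made for this example, not SignalAI's internal implementation.

```python
# Illustrative sketch only. SignalAI wraps these steps in its own blocks and UI;
# the open-source libraries used here (NumPy, SciPy, scikit-learn) are stand-ins
# to show the underlying idea, not SignalAI's actual API.
import numpy as np
from scipy import signal, stats
from sklearn.ensemble import IsolationForest

def extract_features(window, fs=20_000):
    """A few common time- and frequency-domain features for one signal window."""
    rms = np.sqrt(np.mean(window ** 2))           # overall vibration energy
    kurt = stats.kurtosis(window)                 # impulsiveness, sensitive to bearing faults
    crest = np.max(np.abs(window)) / rms          # crest factor
    freqs, psd = signal.welch(window, fs=fs)      # power spectral density
    centroid = np.sum(freqs * psd) / np.sum(psd)  # spectral centroid
    return [rms, kurt, crest, centroid]

# `windows`: a list of 1-D arrays of raw accelerometer samples; here random
# placeholder data, with the first half standing in for known-healthy operation.
windows = [np.random.randn(20_480) for _ in range(100)]
features = np.array([extract_features(w) for w in windows])

# Fit an off-the-shelf anomaly detector on the healthy portion, then score everything.
detector = IsolationForest(random_state=0).fit(features[:50])
anomaly_score = -detector.score_samples(features)  # higher = more anomalous
is_anomaly = detector.predict(features) == -1
```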

To demonstrate these all-round capabilities of SignalAI, we built a demo using NASA's bearing health monitoring challenge [2]. The data set consists of real-time accelerometer data coming from four bearings mounted on a shaft in a test rig at Rexnord Corp. in Milwaukee, WI. The challenge is to analyze the accelerometer data so that we can monitor the health of the bearings over their life span and predict when they are going to fail.
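For readers who want to reproduce the starting point, the data set is distributed as a series of plain-text snapshot files, one per recording interval, with one column of accelerometer samples per channel. A minimal loading sketch in Python might look like the following; the directory name and file layout are assumptions based on how the repository describes the data, not part of SignalAI.

```python
# Minimal loading sketch, assuming the layout described in the dataset's documentation:
# one plain-text snapshot file per recording, whitespace-separated samples, one column
# per accelerometer channel, with file names that sort chronologically.
from pathlib import Path
import numpy as np

DATA_DIR = Path("2nd_test")              # hypothetical path to one test-to-failure run

snapshots = []
for snapshot_file in sorted(DATA_DIR.iterdir()):
    samples = np.loadtxt(snapshot_file)  # shape: (n_samples, n_channels)
    snapshots.append(samples)

print(f"Loaded {len(snapshots)} snapshots of "
      f"{snapshots[0].shape[1]} channels x {snapshots[0].shape[0]} samples each")
```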

We used SignalAI inside Altair Activate (Altair's multi-disciplinary system simulation software) to read in the raw sensor data, extract useful features automatically, and build anomaly detection models on those features. As the outcome, SignalAI generated health indices that track the condition of each bearing over its life span, and by analyzing these health indices it successfully identified the bearing failure events in advance, so the maintenance crew can replace or repair the bearings before they cause any significant losses. In addition, once the results are generated by SignalAI in Activate, they are passed on to Panopticon through an MQTT publisher, and users can visualize the health indices as well as the anomaly status in real time, even remotely, using Panopticon dashboards.
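The final hand-off from Activate to Panopticon happens over MQTT. As a generic illustration of what such a publisher can look like in Python with the paho-mqtt library, here is a short sketch; the broker address, topic name, and JSON payload fields are assumptions for the example, not the demo's actual message schema.

```python
# Hedged sketch of publishing results over MQTT so a dashboard such as Panopticon
# can subscribe to them. Broker, topic, and payload layout are illustrative only.
import json
import paho.mqtt.publish as publish

def publish_health(bearing_id, health_index, is_anomaly,
                   broker="localhost", topic="signalai/bearings"):
    """Send one health-index update for one bearing to the MQTT broker."""
    payload = json.dumps({
        "bearing": bearing_id,
        "health_index": round(float(health_index), 4),
        "anomaly": bool(is_anomaly),
    })
    publish.single(topic, payload, hostname=broker)

# Example: stream the latest score from the anomaly-detection sketch above.
# publish_health("bearing_3", anomaly_score[-1], is_anomaly[-1])
```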



Want to tap the potential of historical or real-time sensor data to detect and prevent anomalies? Want to identify fault types and their root causes by analyzing historical records augmented with synthetic data? Please feel free to get in touch.


References:

[1] https://www.mckinsey.com/business-functions/operations/our-insights/digitally-enabled-reliability-beyond-predictive-maintenance#

[2] https://ti.arc.nasa.gov/tech/dash/groups/pcoe/prognostic-data-repository/#bearing