Predictive Analytics Accuracy and Avoiding Overfitting

November 18, 2014

Overfitting is an important concept that every workforce analyst leveraging predictive techniques should understand in order to ensure the model remains viable and useful across a range of scenarios. If every workforce were uniform and clone-like, this wouldn't be an issue. But variability in the workforce is the norm, and a predictive model must account for it to produce the best results. Ignoring the possibility and impact of overfitting, and failing to take the necessary steps during the model's initial development, can be disastrous: a nonviable model may end up running against critical data in real-time situations, generating results that are quite inaccurate. Since those results may form the basis of critical workforce decisions, such inaccuracies can have serious adverse consequences.

Overfitting occurs when a model fits its training data too perfectly, learning the quirks and noise of that particular data set rather than the underlying patterns. As a result, an overfit model will not produce reliable results: any variation in the input data will skew its ability to predict outcomes accurately. A properly fitted model accounts for variability and gives accurate results on new data, not just the data it was trained on.
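
To make this concrete, here is a minimal sketch in Python. The data and models are purely illustrative (synthetic points, not workforce data): a high-degree polynomial fits the training points almost perfectly but predicts poorly on fresh data, while a simpler model holds up on both.

```python
# Illustrative sketch of overfitting with synthetic data: a degree-15
# polynomial memorizes the training points (low train error) but stumbles
# on new data, while a degree-3 model generalizes better.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=30)

# Fresh data the model never saw during fitting.
X_new = rng.uniform(0, 1, size=(30, 1))
y_new = np.sin(2 * np.pi * X_new).ravel() + rng.normal(scale=0.2, size=30)

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    train_err = mean_squared_error(y, model.predict(X))
    new_err = mean_squared_error(y_new, model.predict(X_new))
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, "
          f"new-data MSE {new_err:.3f}")
```

The telltale sign is the gap between the two numbers: the overfit model looks better on the data it was tuned to, and worse everywhere else.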

When finely tuned models are NOT good news

Normally, a model fine-tuned to the kind of data you want to analyze sounds like a good thing, right? But when it comes to overfitting, this very fine-tuning is what can spell disaster for a model's accuracy and effectiveness. In an overfit scenario, the model is so closely aligned to one specific set of data that it yields near-perfect results on that data, masking the anomalies that appear in the outcomes once the data changes. The more perfectly a model is tuned to a specific data set, the greater the risk that it will fail with other data sets.

Use multiple datasets to mitigate overfitting risks

A fairly straightforward technique for reducing the risk that overfitting will skew your analysis is to use separate data sets at the training, validation, and testing stages of a model. This helps analysts determine early on whether the model is too closely tuned to one specific data set, because it exposes a model that cannot deliver reliable results with the others.

Typically, analysts recommend splitting the original data set into 70% training, 15% validation, and 15% test data via random sampling. Random sampling helps the analyst spot areas where the model skews dramatically when the data varies. The training data is used to fit the model with a machine learning algorithm. The validation set is then used to fine-tune the algorithm, preventing it from growing into a complex model that works only on the training data. Finally, before testing concludes, the analyst confirms that the model can be applied effectively to new, generalized data without compromising the accuracy of the outcome.
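
Here is a minimal sketch of that 70-15-15 workflow in Python using scikit-learn. The file name, the "attrited" label column, and the choice of logistic regression are all assumptions for illustration, not something prescribed by the article.

```python
# Sketch of a 70-15-15 train/validation/test split via random sampling.
# "workforce.csv", the "attrited" column, and the model are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Assumed: one row per employee, numeric features, binary "attrited" label.
df = pd.read_csv("workforce.csv")
X, y = df.drop(columns=["attrited"]), df["attrited"]

# Carve off 70% for training, then split the remaining 30% in half,
# giving 70% train / 15% validation / 15% test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=42)

# Tune on the validation set only (here, the regularization strength C),
# so the test set stays untouched until the final check.
best_model, best_val_acc = None, 0.0
for C in (0.01, 0.1, 1.0, 10.0):
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    val_acc = accuracy_score(y_val, model.predict(X_val))
    if val_acc > best_val_acc:
        best_model, best_val_acc = model, val_acc

# One final evaluation on data the model has never influenced.
print("test accuracy:", accuracy_score(y_test, best_model.predict(X_test)))
```

The design point is that the test set is consulted exactly once, after all tuning decisions are made; if validation accuracy is high but test accuracy drops sharply, that gap is your overfitting warning.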

Using the techniques outlined above will help ensure your predictive analytics model is fit for new data sets, not just the data it was trained on. With a properly fit model in hand, your confidence in its ability to predict accurate results is significantly increased.
