What is an evaluation?

An evaluation is an AI Studio resource that provides an easy way to measure and compare the performance of your classification and regression models (i.e., models, ensembles, and logistic regressions created with supervised learning algorithms). The goal of an evaluation is twofold: to estimate how the model will perform in production (i.e., when making predictions for new instances it has never seen before), and to provide a framework for comparing models built with different configurations or different algorithms, helping you identify the models with the best predictive performance.
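The model-comparison idea above can be sketched outside AI Studio as well. The following is an illustrative example only (it does not use AI Studio's API): two configurations of the same regression algorithm are trained on identical data and compared on the same held-out set, and the higher score identifies the better performer. The dataset, algorithm, and the `alpha` values are arbitrary assumptions chosen for the sketch, using scikit-learn.

```python
# Sketch only: comparing two model configurations on the same held-out
# test data, mirroring what an evaluation lets you do in AI Studio.
# The dataset, algorithm (Ridge), and alpha values are illustrative choices.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# A synthetic regression dataset standing in for your own training data.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two candidate configurations of the same algorithm, scored on the
# same test set so the comparison is apples to apples.
for alpha in (0.1, 10.0):
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    score = r2_score(y_test, model.predict(X_test))
    print(f"alpha={alpha}: R^2 = {score:.3f}")
```

Because both configurations are evaluated against the same unseen instances, the scores are directly comparable, which is exactly the guarantee an evaluation resource gives you.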

To evaluate the performance of your model, you need test data that is different from the data used to train the model. AI Studio then creates a prediction for every instance in the test data and compares the actual objective field values against the predicted ones. You can judge how good your model is using performance measures based on both the correct results and the errors the model makes. Watch this video for more details:
