General Questions

How do I request technical support?

Click the “Help” button at the top to open a service ticket. Provide a description of your issue, and support will be notified.

How do I use my model to make predictions on new data?

After uploading your new dataset using the “Upload Data” button, click “Save data only” in the data review window.

Then, in the chat window, type @eva apply model (model ID) to (data ID), which commands the chatbot to apply your model to your new data. Model and data IDs can be copied by clicking the vertical ellipsis icon next to each dataset or model’s name and selecting the “Copy ID” option.

How are models evaluated?

Twenty percent of your training data is used to validate models. That data is reserved and not accessible during model training. After models are trained, we apply each model to the reserved data to compute the metric (which you choose when creating the task). The computed metrics can be found on the review page.
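As an illustration of the holdout scheme described above, here is a minimal sketch; the function name, shuffling, and seed handling are our own assumptions, not the platform's actual implementation:

```python
import random

def holdout_split(rows, holdout_fraction=0.2, seed=0):
    """Shuffle the rows, then reserve a fraction for validation.

    The reserved rows are never shown to the model during training;
    they are only used afterwards to compute the chosen metric.
    """
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_holdout = int(len(rows) * holdout_fraction)
    return rows[n_holdout:], rows[:n_holdout]  # (train, validation)

train, validation = holdout_split(range(100))
print(len(train), len(validation))  # 80 20
```

With 100 uploaded rows, 80 go to training and 20 are held back for evaluation.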

My classification task has highly unbalanced data. Will it work?

The platform handles unbalanced data automatically. When reviewing the model metrics, we show multiple metrics in addition to the one you selected. Common metrics such as accuracy may not work well for unbalanced data, so we recommend focusing on precision and recall.
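To see why accuracy can mislead on unbalanced data, consider this purely illustrative sketch: a model that always predicts the majority class scores 95% accuracy on a 95/5 split, yet its precision and recall on the minority class are zero.

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for the positive (minority) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 95 negatives, 5 positives: predicting "negative" every time is 95% accurate
# but catches none of the positive cases.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100
print(precision_recall(y_true, y_pred))  # (0.0, 0.0)
```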

About APIs

May I access my model through an API?

APIs can be enabled on the Task Review screen. To reach that screen, click on the model; from there you can activate the API of any model.

For more details, go to Accessing Models through API.
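As a rough sketch of what calling a model API over HTTP typically looks like: the endpoint URL, header names, and payload shape below are placeholders, not the platform's documented interface; consult the Accessing Models through API guide for the real details.

```python
import json
import urllib.request

# Placeholder endpoint and key -- the real URL, auth scheme, and payload
# schema are documented in the Accessing Models through API guide.
API_URL = "https://api.example.com/v1/models/YOUR_MODEL_ID/predict"
API_KEY = "YOUR_API_KEY"

def build_predict_request(features):
    """Build an HTTP POST request for one prediction.

    Pass the result to urllib.request.urlopen() to actually call the API.
    """
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"instances": [features]}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

request = build_predict_request({"age": 30, "plan": "trial"})
print(request.get_header("Content-type"))  # application/json
```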

What is API latency?

API latency measures the time (in milliseconds) a model takes to process a single request. When estimating end-to-end latency, add this figure to the network latency between our servers and your server, plus any additional processing time on your side.

Because latency varies between requests, it is often reported at a certain percentile. On the Task Review page, we report model latency at the 95th percentile. If a model has a 95th-percentile latency of 25 ms, 95% of requests are processed within 25 milliseconds.
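A 95th-percentile latency can be computed from a sample of per-request timings with the nearest-rank method; this is a generic sketch, not the platform's exact procedure:

```python
import math

def percentile_latency(latencies_ms, pct=95):
    """Nearest-rank percentile: the value below which pct% of requests finish."""
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# 20 sampled request latencies in milliseconds, with one slow outlier.
samples = [12, 13, 14, 14, 15, 15, 16, 16, 17, 18,
           18, 19, 20, 21, 22, 23, 24, 25, 25, 90]
print(percentile_latency(samples))  # 25 -> 95% of requests finish within 25 ms
```

Note how the single 90 ms outlier barely affects the p95 figure, which is why percentiles are preferred over averages for latency reporting.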

Why do different models have different latency?

Every model trained on the platform is customized for the training data users provide. The underlying algorithms that process the request differ significantly, which leads to the variation in model latency.

How do you decide which model to use?

As a rule of thumb, if the chosen metric has the word error or loss in its name, pick the model with the lowest value for that metric; otherwise, pick the model with the highest value. Latency can then be applied as an additional threshold, depending on how much latency is acceptable for your real-time application.
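This rule of thumb can be sketched in code; the model records and field names below are hypothetical, not the platform's data model:

```python
def pick_best(models, metric_name, max_latency_ms=None):
    """Minimize error/loss metrics, maximize everything else,
    after filtering out models that exceed the latency threshold."""
    if max_latency_ms is not None:
        models = [m for m in models if m["latency_ms"] <= max_latency_ms]
    minimize = "error" in metric_name.lower() or "loss" in metric_name.lower()
    key = lambda m: m["score"]
    return min(models, key=key) if minimize else max(models, key=key)

candidates = [
    {"name": "model-a", "score": 0.12, "latency_ms": 25},
    {"name": "model-b", "score": 0.10, "latency_ms": 80},
]
print(pick_best(candidates, "log-loss")["name"])      # model-b (lowest loss)
print(pick_best(candidates, "log-loss", 30)["name"])  # model-a (within 30 ms)
```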

Is my data safe?

You always have complete control over your data. All user-uploaded data and derived models are encrypted before being written to cloud storage, and can be permanently removed at any time through our UI. All data and models are considered private, i.e. no other users can access any files in your account.

About Datasets

What is the data limit for uploads?

Trial users have a limit of 100MB for a single file.

What types of features do we support?

We support numeric, categorical, text and time-series data, or any combination of those types.

What data formats do we support?

We currently support CSV and TSV, and will add more data formats soon.

How do I delete a dataset/project?

Any dataset or model can be deleted by clicking the “Delete” option in the drop-down menu next to it. The drop-down menu is accessed by clicking the vertical ellipsis icon next to the dataset or model’s name.

About Models

My model is taking too long to train. How can I stop it?

Any incomplete model training can be stopped by clicking the “Cancel” option in the model’s drop-down menu. This drop-down menu is found by clicking the vertical ellipsis icon next to the model’s name.

How can I monitor the model training progress?

Model training progress can be monitored from the Task Review screen, accessed by clicking on the task you want to monitor.

What if the chatbot Eva doesn’t seem to understand my question?

Try typing the command @eva help into the chatbot. This will list the available Eva functions so you can check whether your command is supported and worded properly. If that does not resolve the issue, please contact support using the “Need Help” button and describe the command you were trying to use in your issue description.

How do I know which metric is most suitable for my problem?

The right metric for your task depends on your application. For classification tasks, you can select accuracy or log-loss as your metric. Explanations of classification metrics can be found here. For numeric prediction tasks (i.e. regression), you can select between MSE, MAE, or MAPE. Explanations of regression metrics can be found here.
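For reference, the regression metrics above compute as follows; this is a generic sketch, not the platform's implementation:

```python
def mse(y_true, y_pred):
    """Mean squared error: penalizes large misses much more than small ones."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error: average miss, in the target's own units."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes no true value is zero."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [100.0, 200.0, 400.0]
y_pred = [110.0, 190.0, 400.0]
print(mse(y_true, y_pred))   # (100 + 100 + 0) / 3 ~ 66.67
print(mae(y_true, y_pred))   # (10 + 10 + 0) / 3 ~ 6.67
print(mape(y_true, y_pred))  # (10% + 5% + 0%) / 3 = 5.0
```

All three have error in the name, so per the rule of thumb above, lower is better.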

How does the platform design models/algorithms?

Most models (including all the deep learning models) trained on the platform are custom-designed by our AI engine. This is what makes the platform unique and enables the end-to-end automation.

Customers’ data are examined by a set of sophisticated proprietary algorithms to understand the underlying structure and data distributions. The extracted insight then guides automated feature engineering and the design of customized deep learning models. After the models are trained, we evaluate each one on the test data (the 20% reserved from the uploaded training data for this purpose). The resulting performance metrics are fed back to the platform and used to further improve its model-design capabilities. In this way, the platform designs better and better models as it trains on more and more datasets and tasks.
