If you run the DL Workbench in the Intel® DevCloud for the Edge, see Troubleshooting for DL Workbench in the DevCloud.
This error appears when incorrect permissions are set for the configuration folder on a host machine running Linux* or macOS*.
The indicator of the problem is the following output in the terminal:
To resolve the problem, follow the steps below:
NOTE: If the configuration folder already exists, delete it before proceeding.
- If you use a non-default configuration directory, replace `~/.workbench` with your directory in the `--assets-directory` argument of the script you used to install the application.
- Creating the directory with the `-m 777` mode makes the directory accessible to ALL users for reading, writing, and executing.
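The steps above can be sketched as follows on Linux or macOS, assuming the default `~/.workbench` configuration path:

```shell
# Remove the existing configuration folder if it exists
# (back up anything you need from it first)
rm -rf ~/.workbench

# Recreate the folder with mode 777: readable, writable,
# and executable by all users
mkdir -p -m 777 ~/.workbench
```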
This error appears due to model and dataset type incompatibility.
Also, check that you do not select a VOC Object-Detection dataset for a Classification model, or an ImageNet Classification dataset for an Object-Detection model.
If you cannot import models from the Open Model Zoo, you may need to specify your proxy settings when running a Docker container. To do that, refer to Advanced Configurations.
If you cannot download a model from the Open Model Zoo because its source is unavailable, select a different model for the same use case from another source. If the problem is connectivity, check your internet connection and specify your proxy settings.
The error shown below may appear due to incorrect user permissions set for an SSL key and/or SSL certificate.
Check the key and certificate permissions. They must have at least `**4` mode, which means read access for the *others* group.
To resolve the problem, run the command below in your terminal and then restart the DL Workbench.
NOTE: The command makes the provided files accessible for reading to all users.
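The exact command is not reproduced here; a minimal sketch using `chmod 644` (owner read/write, read-only for everyone else), with hypothetical demo paths that you should replace with your actual key and certificate:

```shell
# Hypothetical demo paths -- substitute your actual SSL key and certificate
KEY=/tmp/demo_key.pem
CERT=/tmp/demo_cert.pem
touch "$KEY" "$CERT"

# 644 = owner read/write; group and others read-only
chmod 644 "$KEY" "$CERT"
```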
When the DL Workbench is unable to upgrade to the highest version, you can run the highest DL Workbench version without your data, or use the previous DL Workbench version to keep your data in the tool. Choose the solution that suits you best:

- To run the highest version without your previous data, remove the existing data with `rm -rf ~/.workbench/*` (for a local configuration folder), or recreate the Docker volume with `docker volume rm workbench_volume` and `docker volume create workbench_volume`.
- To keep your data in the previous version, create a new configuration folder or volume for the new installation: `mkdir -p -m 777 ~/.workbench_new` or `docker volume create workbench_volume_new`.
If the specified user has no sudo privileges on the remote machine, only a CPU device is available for inference. If you want to profile on GPU and MYRIAD devices, follow the steps described in the Configure Sudo Privileges without Password section of Set Up Remote Target.
If the automatic setup of GPU drivers fails, install dependencies on the remote target machine manually as described in the Install Dependencies on Remote Target Manually section of Set Up Remote Target.
Check the following parameters if you cannot authenticate to the remote machine:
Check the user name for the SSH connection to the remote machine.
Make sure you upload the `id_rsa` key generated when you set up the remote target.
You should upload the `id_rsa` key, which contains a set of symbols surrounded by the lines shown below:
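A PEM-encoded RSA private key is typically delimited by header and footer lines like the following (newer OpenSSH versions may use `BEGIN OPENSSH PRIVATE KEY` instead), with the base64-encoded key material between them:

```
-----BEGIN RSA PRIVATE KEY-----
<base64-encoded key material>
-----END RSA PRIVATE KEY-----
```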
Make sure you have Python* 3.6, 3.7, or 3.8 on your target machine. See Set Up Remote Target for dependency installation instructions and the full list of remote target requirements.
Make sure you have pip* 18 on your target machine. See Set Up Remote Target for dependency installation instructions and the full list of remote target requirements.
Make sure you have Ubuntu* 18.04 on your target machine. See Set Up Remote Target for the full list of remote target requirements.
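The checks above can be run directly on the target machine; a sketch, assuming a standard Ubuntu setup:

```shell
# OS version (expect Ubuntu 18.04)
grep PRETTY_NAME /etc/os-release

# Python version (expect 3.6, 3.7, or 3.8)
python3 --version

# pip version (expect pip 18)
python3 -m pip --version
```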
This failure may occur due to incorrectly set or missing proxy settings. Set the proxies as described in Register Remote Target in the DL Workbench. To update remote machine information, see Profile with Remote Machine.
To learn more about an error, download a `.txt` file with server logs. Click the user icon in the upper-right corner to open the Settings, then click Download Log:
NOTE: Server logs contain sensitive information, such as data on your models. If you do not wish to share this information, attach only the
Use the logs to investigate problems and manually run tools to debug them by entering the Docker* container. Open the `server_log.txt` file and find the latest line that contains an error.
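For example, assuming error lines are marked with the word `ERROR` (an assumption about the log format), the latest one can be located with `grep` and `tail`:

```shell
# Write a small demo log -- in practice, use your downloaded server_log.txt
LOG=/tmp/server_log.txt
printf 'INFO starting\nERROR model import failed\nINFO retrying\nERROR dataset upload failed\n' > "$LOG"

# Print the latest line that contains ERROR
grep 'ERROR' "$LOG" | tail -n 1
```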
If the issue persists, post a question on Intel Community Forum and attach the server logs. If the issue is not reproduced in the container, feel free to post a question as well, but specify that you could not reproduce it in the container. For more information, go to the Enter Docker Container section of the Work with Docker Container page.