Commit 7985e4c2 authored by Brummans, Nick

added documentation for agent setup

parent 88c7c3f8
@@ -95,20 +95,15 @@ You can deploy agents on your varying compute resources as long as it has access
- export S3_SECRET_KEY=s3 secret key
6. Deploy the agent and let it listen to a queue (for example: test). The following command also creates that queue, and service mode makes sure the agent can run multiple different jobs (needed for pipelines). See [here](https://clear.ml/docs/latest/docs/clearml_agent/#deployment) for more options. We also detach so the agent runs in the background. A short Python sketch for sending a test job to this queue follows these steps.
- clearml-agent daemon --queue test --create-queue --service-mode --detach
- For specific GPU units, add the --gpus flag (example: --gpus 0); see the GPU deployment notes below.
7. [optional] We can also lock our terminal in Linux so nobody can read the environment variables.
- sudo screen
- Press ctrl+a then x and enter a password
- To reopen the terminal, enter the password
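To check that the freshly deployed agent actually picks up work, you can send it a throwaway job from any machine where the ClearML SDK is installed and configured against the same server. This is a minimal sketch, not part of the repository; the project and task names are made up.

```python
# smoke_test_agent.py - illustrative only, not part of this repository
from clearml import Task

# Register this script as a task on the ClearML server.
task = Task.init(project_name="agent-smoke-test", task_name="hello agent")

# Stop the local run and enqueue the task on the "test" queue;
# the agent started in step 6 should pick it up and execute it there.
task.execute_remotely(queue_name="test", exit_process=True)

print("This line only runs on the agent.")
```

If the task shows up as completed in the ClearML web UI, the agent and queue are wired up correctly.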
To deploy the ClearML agent with a GPU allocated, add the --gpus argument:
- Install CUDA: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
- Install PyTorch with CUDA support: https://pytorch.org/get-started/locally
- Use cuda as the device in your script (a sketch follows this list)
- clearml-agent daemon --queue default --gpus 0
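For the last bullet, a common PyTorch pattern is to select cuda only when it is available and to move both the model and the data to that device. The sketch below is illustrative and not the repository's training code.

```python
import torch

# Prefer the GPU the agent was allocated via --gpus; fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 1).to(device)   # toy model, for illustration only
batch = torch.randn(4, 10, device=device)   # put the data on the same device
output = model(batch)
print(f"forward pass ran on {device}")
```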
## Example task run
The task run can be used for running a single script. It runs locally by default, but if the **remote_queue** argument is provided, the first epoch runs locally and the following epochs are run by the agent listening to that remote queue.
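The snippet below is a hedged sketch of how such a **remote_queue** switch can be wired with the ClearML SDK; the argument name, project name, and training loop are illustrative, and the repository's actual task script may differ. Note that the agent re-executes the script from the start, so the remote run repeats the first epoch unless checkpoints are restored.

```python
import argparse

from clearml import Task


def train_one_epoch(epoch: int) -> None:
    # Placeholder for the real training step.
    print(f"training epoch {epoch}")


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--remote_queue", default=None,
                        help="queue an agent listens to; omit to run fully locally")
    parser.add_argument("--epochs", type=int, default=5)
    args = parser.parse_args()

    # Task.init records the script, environment and arguments on the ClearML server.
    task = Task.init(project_name="example-project", task_name="example task run")

    for epoch in range(args.epochs):
        train_one_epoch(epoch)
        if epoch == 0 and args.remote_queue:
            # After the first local epoch, stop this process and enqueue the task;
            # when the agent runs the script, this call does nothing, so all
            # epochs execute on the remote worker.
            task.execute_remotely(queue_name=args.remote_queue, exit_process=True)


if __name__ == "__main__":
    main()
```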