
How to troubleshoot PhariaFinetuning

Restarting the Ray cluster

The Ray cluster might need to be restarted to recover from faults or to apply new infrastructure changes.

Caution: Restarting will terminate all ongoing jobs, and past logs will only be accessible via the Aim dashboard.

Steps to restart the cluster

  1. Go to the ArgoCD Dashboard

    • Locate the PhariaFinetuning application.
  2. Find the Kubernetes Resource for the Ray Cluster

    • Identify the head node pod, e.g.,
      pharia-learning-pharia-finetuning-head  
  3. Delete the Head Node Pod

    • Click on the three dots next to the pod name.
    • Select Delete.
    • The cluster will reboot in a few minutes.

[Screenshot: restart-ray]
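
If you prefer the command line over the ArgoCD UI, the head pod can also be deleted with kubectl. This is a minimal sketch; the namespace is an assumption and may differ in your installation:

  # List pods and locate the head node pod (namespace is an assumption)
  kubectl get pods -n pharia-finetuning | grep head

  # Delete the head pod; the operator recreates it and the cluster reboots in a few minutes
  kubectl delete pod pharia-learning-pharia-finetuning-head -n pharia-finetuning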


Alternative: Restart using Ray CLI

We have not tested this approach on our side, but according to the Ray CLI documentation:

  • If there are no configuration changes, use:

    ray up

    This stops and starts the head node first, then each worker node.
  • If configuration changes were introduced, ray up will update the cluster instead of restarting it.

  • See the available arguments to customize cluster updates and restarts in the Ray documentation.
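
A minimal sketch of the CLI approach, assuming the cluster was launched from a cluster configuration file (cluster.yaml is a placeholder name):

  # Restart the cluster described in cluster.yaml (placeholder path)
  # With no configuration changes, this stops and starts the head node first, then each worker node
  ray up cluster.yaml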


Handling insufficient resources

A newly submitted job starts on the Ray cluster only when the resources it requests are available. If resources are occupied, the job waits until they free up; the sketch below shows how to inspect current cluster usage.

Possible reasons for delays in job execution:

  • Another job is still running.
  • The requested machine type is not yet available.
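
To check whether either is the case, you can inspect the cluster from inside the head node pod. A minimal sketch, assuming the pod and namespace names from the restart section above:

  # Show current resource usage (CPUs, GPUs, memory) and pending resource demands
  kubectl exec -n pharia-finetuning pharia-learning-pharia-finetuning-head -- ray status

  # List submitted jobs and their states (e.g. PENDING, RUNNING)
  kubectl exec -n pharia-finetuning pharia-learning-pharia-finetuning-head -- ray job list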

When a job fails due to insufficient resources

Ensure that the job’s resource requirements align with the available cluster resources:

Check if GPUs are available

  • If your job requires GPUs, ensure that there is a worker group configured with GPU workers.
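
One way to verify this on a KubeRay deployment, sketched with an assumed namespace and the standard ray.io/node-type label:

  # List Ray worker pods
  kubectl get pods -n pharia-finetuning -l ray.io/node-type=worker

  # Check whether any worker pod requests a GPU in its resource limits
  kubectl get pods -n pharia-finetuning -l ray.io/node-type=worker -o yaml | grep 'nvidia.com/gpu'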

Check worker limits

  • Ensure the number of workers requested does not exceed the maximum replicas allowed in the configuration.
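
The configured bounds can be read from the RayCluster resource. A sketch, assuming the resource name matches the head pod name without the -head suffix:

  # List RayCluster resources to confirm the actual name
  kubectl get raycluster -n pharia-finetuning

  # Print maxReplicas per worker group (resource name is an assumption)
  kubectl get raycluster pharia-learning-pharia-finetuning -n pharia-finetuning \
    -o jsonpath='{range .spec.workerGroupSpecs[*]}{.groupName}{": maxReplicas="}{.maxReplicas}{"\n"}{end}'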

Increase GPU memory for large models

  • If training a large model, increase the GPU memory limit for workers in the worker pool.
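
To see whether GPU memory is the bottleneck, you can inspect a worker during training. A sketch, assuming nvidia-smi is available in the worker image; replace the placeholder with a pod name from the worker list above:

  # Show GPU memory usage on a training worker (pod name is a placeholder)
  kubectl exec -n pharia-finetuning <worker-pod-name> -- nvidia-smi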

Handle out-of-memory (OOM) issues

  • If the head node runs out of memory, increase its memory limit to allow it to manage jobs effectively.
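
To confirm that the head node was actually OOM-killed and to see its current limit, a sketch using the pod name from the restart section:

  # Look for OOMKilled events and the configured memory limit on the head pod
  kubectl describe pod pharia-learning-pharia-finetuning-head -n pharia-finetuning | grep -iE 'oomkilled|memory'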