### Using the GCP VMs effectively (and cheaply)

#### Linux CLI quick guide - every line written down

All the instructions you should need are given below, unlike the regular Google documentation, which seems to have a huge branching factor. I wrote this because other blog posts don’t really address how I usually set up a machine (and I don’t think my setup is that strange…)

That being said, the following were useful resources for putting this together:

• https://blog.kovalevskyi.com/gce-deeplearning-images-m4-new-pytorch-with-cuda-9-2-1913dd37a7c2

NB: There are now newer images out that may solve the Jupyter issues, but the underlying TF version, etc, doesn’t make it compelling for me to upgrade (yet).

### Installing the Google Cloud tools

#### Installing the python-based tools in a virtualenv

Actually, this doesn’t work for gcloud, which appears to install gsutil itself, so the previous instructions (which worked if you just need to manage Google’s Storage Buckets) don’t really apply here.

So : Ignore the virtualenv idea for ‘containing’ the Google pollution on your local machine : you have to install the packages globally. (If not, please let me know in the comments below : I don’t like polluting my machine with company-specific nonsense.)

#### Installing the GCloud tool itself ‘globally’

We need to install gcloud (which wasn’t needed for just the bucket operations, but it’s required for “compute engine” creation).

As root, update the yum/dnf repos (my local machine is Fedora, so your process may vary on this one part) to include the Google Cloud SDK repo (NB: The indentation for the 2nd line of gpgkey is important) :
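The repo definition below follows Google’s published yum/dnf instructions; it’s worth checking the current Cloud SDK docs in case the baseurl or keys have moved :

```bash
# as root : add the Google Cloud SDK repo
# (note the indentation of the second gpgkey line)
tee /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM

# ... and then install the SDK itself
dnf install google-cloud-sdk
```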

Check that the installation has worked (the commands should show some status-like information) :
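For instance :

```bash
gcloud version
gsutil version
```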

This will ask you to authenticate against your Google Cloud account (and save the token, and other settings, in ~/.boto):
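One way is gcloud init, which walks through the login and default-configuration flow (gcloud auth login is the more minimal alternative) :

```bash
gcloud init
```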

This will have you going to a web authentication page to get the required code (which needs you to identify which Google account is linked to your cloud stuff).

Then to find the project you want to associate the VMs with, either execute :
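The command-line route is probably just :

```bash
# lists the projects (and their ids) visible to the authenticated account
gcloud projects list
```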

Or go to the project link suggested to get the list of project ids, and select the one required :
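Then point gcloud at it (the project id below is a placeholder) :

```bash
gcloud config set project my-project-id
```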

### Choose parameters for a base GPU-enabled VM image

We do this first with a low-cost GPU so that we have a VM image with the Nvidia drivers installed (as well as other software that we want in all our subsequent VMs) as cheaply as possible. This disk can then be cloned, and started with a better GPU (and ~30 second creation delay).

#### Choose VM base image

Since my ideal machine has both current TensorFlow and PyTorch installed, it’s best to start with the tensorflow-gpu Google image, since its TensorFlow has been specially compiled (in a way that is super-difficult to do yourself), whereas the PyTorch install is easy enough to DIY later.

• Actual image name : tf-latest-cu92 (We’ll add PyTorch to this VM later)

#### Choose VM cores and memory size

• Choose a 13GB RAM, 2-core machine (doesn’t need to be powerful, but prefer RAM to cores) :
• Regular is : n1-standard-8 (8 vCPUs, 30GB RAM) at $0.3800/hr ($0.0800/hr preemptible)
• 2-core is : n1-standard-2 (2 vCPUs, 7.5GB RAM) at $0.0950/hr ($0.0200/hr preemptible)
• This choice is : n1-highmem-2 (2 vCPUs, 13GB RAM) at $0.1184/hr ($0.0250/hr preemptible)

#### Choose GPU (initial, and final) - implies region/zone too

• Initially, just use a K80 :
• Starter : --accelerator='type=nvidia-tesla-k80,count=1'
• Realistic : --accelerator='type=nvidia-tesla-p100,count=1'
• Possible : --accelerator='type=nvidia-tesla-v100,count=8'
• Regions with K80s :
• That have been allocated quota… :
• asia-east1 ; asia-northeast1 ; asia-southeast1
• us-central1 ; us-east1 ; us-west1
• europe-west1 ; europe-west3 ; europe-west4
• But actually existing “GPUs for compute workloads” :
• us-central1-{a,c} ; us-east1-{c,d} ; us-west1-{b}
• europe-west1-{b,d}
• asia-east1-{a,b}
• Regions with P100s :
• That have been allocated quota… :
• us-central1 ; us-east1
• But actually existing “GPUs for compute workloads” :
• us-central1-{c,f} ; us-west1-{a,b} ; us-east1-{b,c}
• europe-west1-{b,d} ; europe-west4-{a}
• asia-east1-{a,c}
• Regions with V100s :
• That have been allocated quota… :
• us-central1 ; us-east1
• But actually existing “GPUs for compute workloads” :
• us-central1-{a,f} ; us-west1-{a,b}
• europe-west4-{a}
• asia-east1-{c}

So, despite having been granted quota in a variety of places, it really comes down to choosing us-central1 (say), without really loving the decision… And since we care about P100s (and K80s) : choose zone ‘c’.
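Before committing, it’s easy to sanity-check which accelerator types actually exist in a given region’s zones, for instance :

```bash
# list accelerator types, filtered (here) to the us-central1 zones
gcloud compute accelerator-types list --filter="zone:us-central1"
```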

#### Choose VM persistent disk size

• The GCP Free Tier includes 30 GB of Standard persistent disk storage per month.

But disk size must be as big as the image (sigh) :

- Invalid value for field 'resource.disks[0].initializeParams.diskSizeGb': '10'. Requested disk size cannot be smaller than the image size (30 GB)

This means that the initial base VM’s persistent disk has to be 30GB in size, so we can’t keep both it and a ready-for-use clone sitting around idle for free.

### Actually set up the base VM

This includes all the necessary steps (now the choices have been justified).

This will (probably) request that you approve gcloud access from the current machine to your Google Cloud account :
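(If this machine hasn’t been authorised yet, something like the following triggers the approval flow.)

```bash
gcloud auth login
```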

#### Choose the project id

(Assuming you’ve already set up a project via the Google Cloud ‘Console’ GUI) :
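Using the project id found earlier (placeholder shown) :

```bash
gcloud config set project my-project-id
```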

#### Actually build the VM

This takes < 1 minute : The main Nvidia install happens during the reboot(s) :
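A creation command along the following lines matches the choices justified above; the instance name rdai-base-vm is just a placeholder, and the tf-latest-cu92 image family lives in the deeplearning-platform-release project :

```bash
export INSTANCE_NAME="rdai-base-vm"   # placeholder name
export ZONE="us-central1-c"

gcloud compute instances create $INSTANCE_NAME \
      --zone=$ZONE \
      --machine-type=n1-highmem-2 \
      --image-family=tf-latest-cu92 \
      --image-project=deeplearning-platform-release \
      --boot-disk-size=30GB \
      --accelerator='type=nvidia-tesla-k80,count=1' \
      --metadata='install-nvidia-driver=True' \
      --maintenance-policy=TERMINATE
```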

Once that works, you’ll get a status report message :

And the machine should be running in your Google Cloud ‘Console’ (actually the web-based GUI).

#### Look around inside the VM
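SSH in using the placeholder name and zone from the creation step :

```bash
gcloud compute ssh $INSTANCE_NAME --zone=$ZONE
```

On first login, the VM announces :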

""" This VM requires Nvidia drivers to function correctly. Installation takes 3 to 5 minutes and will automatically reboot the machine. """

• Actually took ~110secs (< 2 mins) :
• Reboots machine
• Re-SSH in
• Installation process continues…

Finally, run nvidia-smi to confirm that the GPU card is visible and that the drivers are working.

#### Install additional useful stuff for the base image

##### First steps

Of course, you’re free to make other choices, but my instinct is to set up a Python3 user virtualenv in ~/env3/ :
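Something like the following does that; whether to include --system-site-packages (so the virtualenv can also see the system-wide TensorFlow) is a matter of taste - here I assume yes :

```bash
# on the VM : create a python3 virtualenv that can also see the system-wide packages
virtualenv --system-site-packages -p python3 ~/env3

# activate it now, and on every future login
echo '. ~/env3/bin/activate' >> ~/.bashrc
. ~/env3/bin/activate
```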

##### Install PyTorch

Here we install PyTorch (0.4.1) into the VM (TensorFlow is already baked in) :
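With the virtualenv active, something like this should do it - though the exact wheel for CUDA 9.2 is best looked up on pytorch.org, since the plain PyPI package may be built against a different CUDA version :

```bash
# inside ~/env3 on the VM - check https://pytorch.org for the CUDA-9.2-specific wheel if needed
pip install torch==0.4.1 torchvision
```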

##### Install gcsfuse

The following (taken from the gcsfuse repository) are required to install the ‘gcsfuse’ utility, which is needed to mount storage buckets as file systems :
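On the Debian-based image this boils down to something like the following (per the gcsfuse installation docs; worth re-checking there for the current repo/key URLs) :

```bash
# add the gcsfuse apt repo and its signing key, then install
export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-get update
sudo apt-get install gcsfuse
```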

##### Set up JupyterLab with the user virtualenv

We can see that jupyter is running on the machine already, using python3 :
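For instance :

```bash
# shows the already-running jupyter process (and the python3 that launched it)
ps aux | grep -i jupyter
```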

So that we can use JupyterLab from within our own (user) virtualenv, we need to set it up (assuming ~/env3/ is the virtualenv as above). Once logged into the cloud machine as the cloud user, make the virtualenv’s python available to the ‘root’ JupyterLab.
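One way (a sketch; the kernel name env3 is just a label) is to register the virtualenv as an extra Jupyter kernel :

```bash
# inside ~/env3 on the VM : make this virtualenv selectable as a JupyterLab kernel
pip install ipykernel
python -m ipykernel install --user --name=env3 --display-name="Python 3 (env3)"
```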

For ease-of-use of the JupyterLab system, it’s helpful to be able to navigate to your user code quickly. So, as your cloud user, go to where you want your workspace to live, and create a link to it that will be visible within the main JupyterLab opening directory :
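For example (the paths are placeholders, and this assumes JupyterLab opens in the cloud user’s home directory) :

```bash
# hypothetical workspace location : link it into the directory JupyterLab opens in
mkdir -p ~/my-workspace
ln -s ~/my-workspace ~/workspace-link
```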

(This only has to be done once - it persists).

##### Installation Summary

About half of the 30GB is now used :
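For example :

```bash
df -h /
```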

#### Check that TensorFlow and PyTorch are operational

##### TensorFlow

Here’s a ~minimal tensorflow-gpu check :
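A sketch (run inside the virtualenv on the VM; the exact output will vary) :

```bash
python -c '
import tensorflow as tf
print(tf.__version__)
print(tf.test.gpu_device_name())   # expect something like /device:GPU:0
'
```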

##### PyTorch

Here’s a ~minimal pytorch-gpu check :
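A similar sketch for PyTorch :

```bash
python -c '
import torch
print(torch.__version__)
print(torch.cuda.is_available())       # expect True
print(torch.cuda.get_device_name(0))   # expect the attached GPU, e.g. a Tesla K80
'
```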

### Now clone the base image

We’re currently in a no-charge state (assuming this is your only persistent disk on GCP, since you have a free 30GB quota as a base). But, since we’d like to create new ‘vanilla’ machines from this one, we have to go into the (low) charge-zone.

#### Clone persistent disk from the current boot image

This makes a new ‘family’ so that you can specify the VM+Drivers+Extras easily.
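Something like the following creates a reusable image (and image family) from the base VM’s boot disk; the image and family names are placeholders, and the VM should be stopped first :

```bash
# the boot disk has the same name as the VM it was created with
gcloud compute images create rdai-base-image-1 \
      --source-disk=$INSTANCE_NAME \
      --source-disk-zone=$ZONE \
      --family=rdai-base
```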

#### (Finally) create a ‘better GPU’ machine using that boot image
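For instance, a preemptible P100 machine built from the new image family (names are again placeholders) :

```bash
gcloud compute instances create rdai-p100-vm \
      --zone=$ZONE \
      --machine-type=n1-highmem-2 \
      --image-family=rdai-base \
      --accelerator='type=nvidia-tesla-p100,count=1' \
      --maintenance-policy=TERMINATE \
      --preemptible
```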

This machine is running (and costing, for the preemptible P100 version, ~$0.50 an hour). So you can ssh into it :
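For example :

```bash
gcloud compute ssh rdai-p100-vm --zone=$ZONE
```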

#### Finally : Ensure the VM is not running

Now, we’ve accomplished a few things :

• We have a clean VM with NVidia installed, ready to be cloned
• A cloned persistent disk that can be used ‘live’ - but that we’re also not too attached to

Thus, we can stop where we are, and think about actually using the machine(s) :
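For example, stopping both the P100 machine and the base VM (placeholder names as above) :

```bash
gcloud compute instances stop rdai-p100-vm --zone=$ZONE
gcloud compute instances stop $INSTANCE_NAME --zone=$ZONE
```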

This gets us out of the ‘running a P100 for no reason’ state.

#### Summary

So : We now have two TERMINATED VMs with persistent disks :

• one with the BASE image (can be updated)
• the other as a preemptible nice-GPU one

Let’s wait a while, and see to what extent the preemptible machine disappears by 1:10am tomorrow…

… waited for 24hrs …

… And indeed, the persistent disk does not die, even though a machine based on it would TERMINATE after 24hrs. That means we can safely store data on the remaining ~14GB of persistent disk available to us.

### Using your shiny new cloud Machine(s)

#### SSH into an image

It’s probably a good idea to run training (etc) within a screen or a tmux, so that if your network disconnects, the machine will keep going.
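For example (placeholder names as before) :

```bash
gcloud compute ssh rdai-p100-vm --zone=$ZONE

# then, on the cloud machine, keep long-running jobs inside tmux
tmux new -s work        # later : tmux attach -t work
```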

#### Run Jupyter locally

You can get access to it via a browser at http://localhost:8880/ by setting up a proxy to the cloud machine’s port 8080 (localhost:8880 was chosen to avoid conflict with ‘true local’ jupyter sessions) :
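For example, tunnelling through ssh (instance name is the placeholder from above) :

```bash
# forward local port 8880 to port 8080 on the cloud machine
gcloud compute ssh rdai-p100-vm --zone=$ZONE -- -L 8880:localhost:8080
```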

#### Run Tensorboard locally

You can get access to it via a browser at http://localhost:6606/ by setting up a proxy to the cloud machine’s port 6006 (localhost:6606 was chosen to avoid conflict with ‘true local’ tensorboard sessions) :
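For example :

```bash
# forward local port 6606 to port 6006 on the cloud machine
gcloud compute ssh rdai-p100-vm --zone=$ZONE -- -L 6606:localhost:6006
```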

(make sure you’re pointing to a log directory that has files in it…)

See the official documentation. Also note that it’s definitely easier to get this right if you also have an open ssh session :
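On the cloud machine itself, something like (the ./log directory is a placeholder for wherever your training writes its logs) :

```bash
tensorboard --logdir=./log --port=6006
```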

#### Mount Bucket Storage as a Drive

See the official documentation :
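A sketch (bucket name and mount point are placeholders) :

```bash
mkdir -p ~/bucket-data                  # hypothetical mount point
gcsfuse my-bucket-name ~/bucket-data    # substitute your own bucket's name
```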

Expected output :

Unmount the bucket :
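Assuming the mount point above :

```bash
fusermount -u ~/bucket-data
```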

#### Stop the VM

I’ve included this one twice, since it’s important not to let these things sit idle…
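For example (placeholder name as before) :

```bash
gcloud compute instances stop rdai-p100-vm --zone=$ZONE
```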

All Done!