Using TensorFlow on GPU Nodes

From T2B Wiki

A simple example

The example given below assumes that you are connected to a GPU node (see this page on the GPUs).

First, you need to declare the GPUs in your environment:

$ source /swmgrs/icecubes/set_gpus.sh

To make sure that it worked, try something like this:

$ echo $CUDA_VISIBLE_DEVICES
0,1

In this case, we have 2 GPU devices at our disposal.
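The same check can be done programmatically. A minimal sketch in plain Python, assuming only that CUDA_VISIBLE_DEVICES uses the usual comma-separated format:

```python
import os

def visible_gpu_ids(env=os.environ):
    """Parse CUDA_VISIBLE_DEVICES into a list of device ID strings."""
    value = env.get("CUDA_VISIBLE_DEVICES", "")
    return [d for d in value.split(",") if d != ""]

# Example with the value from the session above:
print(visible_gpu_ids({"CUDA_VISIBLE_DEVICES": "0,1"}))  # ['0', '1']
```

An empty or unset variable yields an empty list, which is a quick way to detect that the `set_gpus.sh` step was skipped.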

To easily get a ready-to-use software environment for TensorFlow with GPU support, let's use a Singularity container. First, download the container image:

$ singularity pull --name osgvo-tensorflow-gpu.simg shub://opensciencegrid/osgvo-tensorflow-gpu

(This image has been tested and has proven to work fine on our GPU nodes, but if you want to try something else, feel free to browse Singularity Hub.)

Now, you need to create a home directory that will be mounted in the container, so that you can easily share files between the container and the GPU node:

$ mkdir /tmp/my_homedir

This directory will be your home directory in the Singularity container.
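The setup steps (create the directory, place the script inside it) can also be scripted. A sketch in plain Python; a temporary directory stands in for /tmp/my_homedir here, so the path is illustrative only:

```python
import os
import tempfile

# Illustrative stand-in for /tmp/my_homedir; any writable path works.
home_dir = os.path.join(tempfile.mkdtemp(), "my_homedir")
os.makedirs(home_dir, exist_ok=True)

# Create the (for now empty) test script inside the shared home directory.
script_path = os.path.join(home_dir, "test_matmul_gpu.py")
with open(script_path, "w") as f:
    f.write("import tensorflow as tf\n")

print(os.path.isdir(home_dir), os.path.isfile(script_path))  # True True
```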

Now, let's write a simple Python script named /tmp/my_homedir/test_matmul_gpu.py with the following content:

import tensorflow as tf

# Pin the ops to the first GPU while building the graph;
# in TensorFlow 1.x, tf.device() must wrap op creation, not sess.run().
with tf.device("/gpu:0"):
  matrix1 = tf.constant([[3., 3.]])    # 1x2 matrix
  matrix2 = tf.constant([[2.], [2.]])  # 2x1 matrix
  product = tf.matmul(matrix1, matrix2)

with tf.Session() as sess:
  result = sess.run(product)
  print(result)

It's worth insisting on this point: the script must be in the previously created home directory, otherwise it won't be visible from inside the container!

This script just defines two matrices, multiplies them on the first GPU, and prints the result. Now, let's run it in a container:

$ singularity exec --nv --home /tmp/my_homedir ./osgvo-tensorflow-gpu.simg python test_matmul_gpu.py
...
[[12.]]
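As a sanity check on that output, the same product can be computed by hand in plain Python, without TensorFlow: a 1x2 matrix times a 2x1 matrix yields a single entry, 3*2 + 3*2 = 12.

```python
# The two matrices from the script above.
matrix1 = [[3., 3.]]    # 1x2
matrix2 = [[2.], [2.]]  # 2x1

# Plain-Python matrix multiplication for these shapes.
result = [[sum(a * b[0] for a, b in zip(row, matrix2))] for row in matrix1]
print(result)  # [[12.0]]
```

If the container run prints anything else, the problem is in the environment (driver, --nv flag, image), not in the arithmetic.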