[[File:Exclamation-mark.jpg|left|25px|line=1|]] This section is experimental; feel free to improve it and send us comments or questions!
 
== About GPUs at T2B ==
 
We do not have any GPUs at T2B. However, all members of a Belgian university have access to GPUs through their university cluster, as discussed below.
 
 
=== If you belong to a Flemish university (VUB, UAntwerpen, UGent) ===
----
 
<br>You have access to all VSC clusters, which offer quite a choice of GPUs.
Have a look at the available VSC clusters and their hardware on [https://docs.vscentrum.be/en/latest/hardware.html this page].<br>
-> For instance, the '''VUB Hydra''' cluster has 4 nodes with 2 x '''Nvidia Tesla P100''' cards and 8 nodes with 2 x '''Nvidia A100''' cards.
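The VSC clusters use the Slurm batch system, so GPUs are requested through your job script. The sketch below is only an illustration: the job name, memory and GPU option are assumptions and differ per cluster (some installations use '''--gres=gpu:1''' instead of '''--gpus'''), so always check the documentation of the cluster you use.
<pre>#!/bin/bash
# Hypothetical minimal Slurm job script requesting one GPU (adapt to your cluster).
#SBATCH --job-name=gpu-test
#SBATCH --time=01:00:00
#SBATCH --mem=8G
#SBATCH --gpus=1          # on some clusters the equivalent option is --gres=gpu:1

# Show which GPU(s) were allocated to this job
nvidia-smi</pre>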
 
 
==== Getting an account ====
You can easily get an account that is valid on all VSC clusters; just follow their documentation [https://docs.vscentrum.be/en/latest/index.html here].
 
 
==== Access GRID resources ====
We have checked that, at least on VUB Hydra, you have access to /cvmfs, while /pnfs can only be used via [[GridStorageAccess|grid commands]] (so you can't do a '''ls /pnfs''').
To get an environment similar to the one on the T2B cluster, just source the following:
source /cvmfs/grid.cern.ch/centos7-umd4-ui-211021/etc/profile.d/setup-c7-ui-python3-example.sh
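Once this UI environment is sourced, /pnfs can be reached with grid tools such as '''gfal-ls''' instead of a plain '''ls'''. The example below is only a sketch: the VO name, storage endpoint and path are placeholders, so refer to the [[GridStorageAccess|grid commands]] page for the actual T2B storage URL and your experiment's directory.
<pre># Create a grid proxy first (the VO name "cms" is only an example)
voms-proxy-init --voms cms

# List a /pnfs directory through the grid storage interface
# (endpoint and path are placeholders; see the GridStorageAccess page)
gfal-ls srm://<t2b-storage-endpoint>/pnfs/<experiment-path>/store/user/<username>/</pre>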
 
==== Support and Feedback ====
As we do not manage any of these clusters, please contact their support first, adding us in CC if you want.
If your cluster does not have what you need (like /cvmfs, etc.), feel free to inform us and we will discuss with the other admins how to make it possible. Please note that VSC has a strict process for adding new software, so it might take some time.<br>
Also, as this mixed usage of our resources and GPUs from other clusters is rather new, we would appreciate any feedback you might have!
 


<br>
<br>
=== If you belong to a Walloon university (ULB, UCL) ===
----
 
<br>You have access to all CECI clusters, which offer quite a choice of GPUs.
Have a look at the available CECI clusters and their hardware on [https://www.ceci-hpc.be/clusters.html this page].<br>
-> For instance, the '''UMons Dragon2''' cluster has 2 nodes with 2 x '''Nvidia Tesla V100''' cards.
 


==== Getting an account ====
You can easily get an account that is valid on all CECI clusters; just follow their documentation [https://login.ceci-hpc.be/init/ here].


==== Access GRID resources ====
At this time, there is no /cvmfs access, and therefore no access to /pnfs resources.<br>
You would have to use rsync to transfer files to/from T2B if needed.
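As an illustration, such a transfer could look like the sketch below (the hostname, username and paths are placeholders; use your usual T2B user interface machine and your own directories):
<pre># Pull input files from T2B to the local cluster
rsync -avz --progress <username>@m0.iihe.ac.be:/user/<username>/input_data/ ./input_data/

# Push results back to T2B once the job has finished
rsync -avz --progress ./results/ <username>@m0.iihe.ac.be:/user/<username>/results/</pre>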


==== Support and Feedback ====
As we do not manage any of these clusters, please contact their support first, adding us in CC if you want.
If your cluster does not have what you need (like /cvmfs, etc.), feel free to inform us and we will discuss with the other admins how to make it possible.<br>
Also, as this mixed usage of our resources and GPUs from other clusters is rather new, we would appreciate any feedback you might have!
