This lesson is in the early stages of development (Alpha version)

Introduction to High-Performance Computing

Why Use a Cluster?

Overview

Teaching: 15 min
Exercises: 5 min
Questions
  • Why would I be interested in High Performance Computing (HPC)?

  • What can I expect to learn from this course?

Objectives
  • Be able to describe what an HPC system is

  • Identify how an HPC system could benefit you.

Frequently, research problems that use computing can outgrow the desktop or laptop computer where they started:

In all these cases, what is needed is access to more computers that can be used at the same time.

And what do you do?

Talk to your neighbour, office mate or rubber duck about your research.

  • How does computing help you do your research?
  • How could more computing help you do more or better research?

A standard laptop for standard tasks

Today, people coding or analysing data typically work with laptops.

[Figure: A standard laptop]

Let’s dissect what resources programs running on a laptop require:

Schematically, this can be reduced to the following:

[Figure: Schematic of how a computer works]

When tasks take too long

When the tasks to solve become computationally heavy, the work is typically outsourced from the local laptop or desktop to somewhere else. Take, for example, the task of finding directions for your next business trip. The capabilities of your laptop are typically not enough to calculate that route on the spot, so you use a website, which in turn runs on a server that is almost certainly not in the same room as you are.

[Figure: A rack half full with servers]

Note here that a server is usually just a noisy computer mounted into a rack cabinet, which in turn resides in a data center. The internet makes it possible for these data centers to be far away from your laptop. What people call the cloud is mostly a web service where you can rent such servers by providing your credit card details and clicking together the specs of the remote resource you need.

The server itself has no direct display or input methods attached to it. But most importantly, it has much more storage, memory and compute capacity than your laptop will ever have. In any case, you need a local device (laptop, workstation, mobile phone or tablet) to interact with this remote machine, which people typically call ‘a server’.

When one server is not enough

If the computational task or analysis is too demanding for a single server, larger aggregations of servers are used. These go by the name of clusters or supercomputers.

[Figure: A rack with servers]

The methodology of providing input data, passing options and flags, and retrieving results is quite different from using a plain laptop. Moreover, a GUI-style interface is often discarded in favor of the command line. This imposes a double paradigm shift for prospective users:

  1. they work with the command line (not a GUI-style user interface)
  2. they work with a distributed set of computers (called nodes)

I’ve never used a server, have I?

Take a minute and think about which of your daily interactions with a computer may require a remote server or even cluster to provide you with results.

Key Points

  • High Performance Computing (HPC) typically involves connecting to very large computing systems elsewhere in the world.

  • These other systems can be used to do work that would either be impossible or much slower on smaller systems.

  • The standard method of interacting with such systems is via a command-line interface such as Bash.


Working on a remote HPC system

Overview

Teaching: 25 min
Exercises: 10 min
Questions
  • What is an HPC system?

  • How does an HPC system work?

  • How do I log on to a remote HPC system?

Objectives
  • Connect to a remote HPC system.

  • Understand the general HPC system architecture.

What is an HPC system?

The words “cloud”, “cluster”, and the phrase “high-performance computing” or “HPC” are used a lot in different contexts and with various related meanings. So what do they mean? And more importantly, how do we use them in our work?

The cloud is a generic term commonly used to refer to computing resources that are a) provisioned to users on demand or as needed and b) represent real or virtual resources that may be located anywhere on Earth. For example, a large company with computing resources in Brazil, Zimbabwe and Japan may manage those resources as its own internal cloud and that same company may also utilize commercial cloud resources provided by Amazon or Google. Cloud resources may refer to machines performing relatively simple tasks such as serving websites, providing shared storage, providing webservices (such as e-mail or social media platforms), as well as more traditional compute intensive tasks such as running a simulation.

The term HPC system, on the other hand, describes a stand-alone resource for computationally intensive workloads. These systems are typically composed of a multitude of independent processing and storage elements, designed to handle high volumes of data and/or high rates of floating-point operations per second (FLOPS) with the highest possible performance. For example, all of the machines on the Top-500 list are HPC systems. To support these constraints, an HPC resource must exist in a specific, fixed location: networking cables can only stretch so far, and electrical and optical signals can travel only so fast.

The word “cluster” is often used for small to moderate scale HPC resources less impressive than the Top-500. Clusters are often maintained in computing centers that support several such systems, all sharing common networking and storage to support common compute intensive tasks.

Logging in

Go ahead and log in to the cluster: Myriad at University College London.

[user@laptop ~]$ ssh yourUsername@myriad.rc.ucl.ac.uk

Remember to replace yourUsername with the username supplied by the instructors. You will be asked for your password. But watch out, the characters you type are not displayed on the screen.

You are logging in using a program known as the secure shell or ssh. This establishes a temporary encrypted connection between your laptop and myriad.rc.ucl.ac.uk. The word before the @ symbol, e.g. yourUsername here, is the user account name that you have access permissions for on the cluster.

Where do I get this ssh from?

On Linux and/or macOS, the ssh command line utility is almost always pre-installed. Open a terminal and type ssh --help to check if that is the case.

At the time of writing, built-in OpenSSH support on Microsoft Windows is still fairly recent. Alternatives are PuTTY, Bitvise SSH, mRemoteNG or MobaXterm. Download one, install it and open the GUI. The GUI asks for your user name and the destination address or IP of the computer you want to connect to. Once provided, you will be queried for your password just like in the example above.

Where are we?

Many users are tempted to think of a high-performance computing installation as one giant, magical machine. Sometimes, people will assume that the computer they’ve logged onto is the entire computing cluster. So what’s really happening? What computer have we logged on to? The name of the current computer we are logged onto can be checked with the hostname command. (You may also notice that the current hostname is also part of our prompt!)

[yourUsername@login12 ~]$  hostname
Myriad

Nodes

Individual computers that compose a cluster are typically called nodes (although you will also hear people call them servers, computers and machines). On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the head node, login node or submit node. A login node serves as an access point to the cluster. As a gateway, it is well suited for uploading and downloading files, setting up software, and running quick tests. It should never be used for doing actual work.

The real work on a cluster gets done by the worker (or compute) nodes. Worker nodes come in many shapes and sizes, but generally are dedicated to long or hard tasks that require a lot of computational resources.

All interaction with the worker nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is called SGE). We’ll learn more about how to use the scheduler to submit jobs next, but for now, it can also tell us more information about the worker nodes.

For example, we can view all of the worker nodes with the qhost command.

[yourUsername@login12 ~]$  qhost
    4 type * nodes: 36 cores, 188.4G RAM
    7 type B nodes: 36 cores,   1.5T RAM
   66 type D nodes: 36 cores, 188.4G RAM
    9 type E nodes: 36 cores, 188.4G RAM
    1 type F nodes: 36 cores, 188.4G RAM
    3 type H nodes: 36 cores, 172.7G RAM
   53 type H nodes: 36 cores, 188.4G RAM
    3 type I nodes: 36 cores,   1.5T RAM
    2 type J nodes: 36 cores, 188.4G RAM

There are also specialized machines used for managing disk storage, user authentication, and other infrastructure-related tasks. Although we do not typically logon to or interact with these machines directly, they enable a number of key features like ensuring our user account and files are available throughout the HPC system.

Shared file systems

This is an important point to remember: files saved on one node (computer) are often available everywhere on the cluster!

What’s in a node?

All of an HPC system’s nodes have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space. CPUs are a computer’s tool for actually running programs and calculations. Information about a current task is stored in the computer’s memory. Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e. data is still there even if the computer has been restarted.

[Figure: Node anatomy]

Explore Your Computer

Try to find out the number of CPUs and amount of memory available on your personal computer.

Solution

There are several ways to do this. Most operating systems have a graphical system monitor, like the Windows Task Manager. More detailed information can be found on the command line:

  • Run system utilities
    [user@laptop ~]$ nproc --all
    [user@laptop ~]$ free -m
    
  • Read from /proc
    [user@laptop ~]$ cat /proc/cpuinfo
    [user@laptop ~]$ cat /proc/meminfo
    
  • Run system monitor
    [user@laptop ~]$ htop
    

Explore The Head Node

Now compare the resources of your computer with those of the head node.

Solution

[user@laptop ~]$ ssh yourUsername@myriad.rc.ucl.ac.uk
[yourUsername@login12 ~]$  nproc --all
[yourUsername@login12 ~]$  free -m

You can get more information about the processors using lscpu, and a lot of detail about the memory by reading the file /proc/meminfo:

[yourUsername@login12 ~]$  less /proc/meminfo

Explore a Worker Node

Finally, let’s look at the resources available on the worker nodes where your jobs will actually run. Try running this command to see the name, CPUs and memory available on the worker nodes (the instructors will give you the ID of the compute node to use):

[yourUsername@login12 ~]$  qhost -h node-d00a-001

Compare Your Computer, the Head Node and the Worker Node

Compare your laptop’s number of processors and memory with the numbers you see on the cluster head node and worker node. Discuss the differences with your neighbor.

What implications do you think the differences might have on running your research work on the different systems and nodes?

Units and Language

A computer’s memory and disk are measured in units called Bytes (one Byte is 8 bits). As today’s files and memory have grown large by historical standards, volumes are noted using SI prefixes. So 1000 Bytes is a Kilobyte (kB), 1000 Kilobytes is a Megabyte (MB), 1000 Megabytes is a Gigabyte (GB), etc.

History and common usage have, however, mixed this notation with a different meaning. When people say “Kilobyte”, they often mean 1024 Bytes instead. In that spirit, a Megabyte is 1024 Kilobytes.

To address this ambiguity, the International System of Quantities standardizes the binary prefixes (with a base of 2^10 = 1024) as Kibi (Ki), Mebi (Mi), Gibi (Gi), etc. For more details, see the IEC binary prefix standard.
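
To see the difference concretely, here is a quick shell-arithmetic check (runnable in any bash shell) contrasting one Gibibyte (binary prefix) with one Gigabyte (SI prefix):

[user@laptop ~]$ echo $((1024 * 1024 * 1024))    # 1 GiB = 2^30 Bytes
1073741824
[user@laptop ~]$ echo $((1000 * 1000 * 1000))    # 1 GB = 10^9 Bytes
1000000000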

Differences Between Nodes

Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have a larger amount of memory, or specialized resources such as Graphics Processing Units (GPUs).

With all of this in mind, we will now cover how to talk to the cluster’s scheduler, and use it to start running our scripts and programs!

Key Points

  • An HPC system is a set of networked machines.

  • HPC systems typically provide login nodes and a set of worker nodes.

  • The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted file systems, etc.).

  • Files saved on one node are available on all nodes.


Scheduling jobs

Overview

Teaching: 45 min
Exercises: 30 min
Questions
  • What is a scheduler and why are they used?

  • How do I launch a program to run on any one node in the cluster?

  • How do I capture the output of a program that is run on a node in the cluster?

Objectives
  • Run a simple Hello World style program on the cluster.

  • Submit a simple Hello World style script to the cluster.

  • Use the batch system command line tools to monitor the execution of your job.

  • Inspect the output and error files of your jobs.

Job scheduler

An HPC system might have thousands of nodes and thousands of users. How do we decide who gets what and when? How do we ensure that a task is run with the resources it needs? This job is handled by a special piece of software called the scheduler. On an HPC system, the scheduler manages which jobs run where and when.

The following illustration compares the tasks of a job scheduler to a waiter in a restaurant. If you have ever had to wait a while in a queue to get into a popular restaurant, you may now understand why your jobs sometimes do not start instantly, as they would on your laptop.

[Figure: Compare a job scheduler to a waiter in a restaurant]

Job scheduling roleplay (optional)

Your instructor will divide you into groups taking on different roles in the cluster (users, compute nodes and the scheduler). Follow their instructions as they lead you through this exercise. You will be emulating how a job scheduling system works on the cluster.


The scheduler used in this lesson is SGE. Although SGE is not used everywhere, running jobs is quite similar regardless of what software is being used. The exact syntax might change, but the concepts remain the same.

Running a batch job

The most basic use of the scheduler is to run a command non-interactively. Any command (or series of commands) that you want to run on the cluster is called a job, and the process of using a scheduler to run the job is called batch job submission.

In this case, the job we want to run is just a shell script. Let’s create a demo shell script to run as a test. The login node has a number of terminal-based text editors installed. Use whichever you prefer. Unsure? nano is a pretty good, basic choice.

[yourUsername@login12 ~]$  chmod +x example-job.sh
[yourUsername@login12 ~]$  cat example-job.sh
#!/bin/bash -l

echo -n "This script is running on "
hostname

Creating our test job

Run the script. Does it execute on the cluster or just our login node?

Solution

[yourUsername@login12 ~]$  ./example-job.sh
This script is running on 

This job runs on the login node.

If you completed the previous challenge successfully, you probably realise that there is a distinction between running the job through the scheduler and just “running it”. To submit this job to the scheduler, we use the qsub command.

[yourUsername@login12 ~]$  qsub  example-job.sh
Your job 36855 ("example-job.sh") has been submitted

And that’s all we need to do to submit a job. Our work is done – now the scheduler takes over and tries to run the job for us. While the job is waiting to run, it goes into a list of jobs called the queue. To check on our job’s status, we check the queue using the command qstat -u yourUsername.

[yourUsername@login12 ~]$  qstat -u yourUsername
job-ID  prior   name	   user         state submit/start at     queue                          slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
3979883 3.50000 example-jo yourUser     r     06/25/2020 11:36:30 Arya@node-b00a-003                 1

We can see all the details of our job, most importantly that it is in the r or running state. Sometimes our jobs might need to wait in a queue (qw, queued and waiting) or have an error state (E).

The best way to check our job’s status is with qstat. Of course, running qstat repeatedly to check on things can be a little tiresome. To see a real-time view of our jobs, we can use the watch command. watch reruns a given command at 2-second intervals. This is too frequent, and will likely upset your system administrator. You can change the interval to a more reasonable value, for example 15 seconds, with the -n 15 parameter. Let’s try using it to monitor another job.

[yourUsername@login12 ~]$  qsub  example-job.sh
[yourUsername@login12 ~]$  watch -n 15 qstat -u yourUsername

You should see an auto-updating display of your job’s status. When it finishes, it will disappear from the queue. Press Ctrl-C when you want to stop the watch command.

Where’s the output?

On the login node, this script printed output to the terminal – but when we exit watch, there’s nothing. Where’d it go?

Cluster job output is typically redirected to a file in the directory you launched it from. Use ls to find and read the file.

Customising a job

The job we just ran used all of the scheduler’s default options. In a real-world scenario, that’s probably not what we want. The default options represent a reasonable minimum. Chances are, we will need more cores, more memory, more time, among other special considerations. To get access to these resources we must customize our job script.

Comments in UNIX shell scripts (denoted by #) are typically ignored, but there are exceptions. For instance, the special #! comment at the beginning of a script specifies what program should be used to run it (you’ll typically see #!/bin/bash). Schedulers like SGE also have a special comment used to denote scheduler-specific options. Though these comments differ from scheduler to scheduler, SGE’s special comment is #$. Anything following the #$ comment is interpreted as an instruction to the scheduler.

Let’s illustrate this by example. By default, a job’s name is the name of the script, but the -N option can be used to change the name of a job. Add an option to the script:

[yourUsername@login12 ~]$  cat example-job.sh
#!/bin/bash -l
#$  -N new_name

echo -n "This script is running on "
hostname
echo "This script has finished successfully."

Submit the job (using qsub example-job.sh) and monitor it:

[yourUsername@login12 ~]$  qstat -u yourUsername
job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
-----------------------------------------------------------------------------------------------------------------
38191   0.00000 new_name   yourUser     qw    06/25/2020 13:25:26                                    1

Fantastic, we’ve successfully changed the name of our job!

Setting up email notifications

Jobs on an HPC system might run for days or even weeks. We probably have better things to do than constantly check on the status of our job with qstat. Looking at the manual page for qsub, can you set up our test job to send you an email when it finishes?
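
One possible approach is sketched below; -M and -m are the standard Grid Engine mail options, but check the qsub man page on your system, and note that the email address is just a placeholder.

[yourUsername@login12 ~]$  cat example-job.sh
#!/bin/bash -l
#$  -N email_test
# -M sets the recipient address (placeholder below); -m e requests mail at job end
#$  -M your.name@example.com
#$  -m e

echo -n "This script is running on "
hostname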

Resource requests

But what about more important changes, such as the number of cores and memory for our jobs? One thing that is absolutely critical when working on an HPC system is specifying the resources required to run a job. This allows the scheduler to find the right time and place to schedule our job. If you do not specify requirements (such as the amount of time you need), you will likely be stuck with your site’s default resources, which is probably not what you want.

Several key resource requests include the maximum walltime (-l h_rt) and the amount of memory (-l mem); a sketch showing how such requests appear in a job script follows the note below.

Note that just requesting these resources does not make your job run faster! We’ll talk more about how to make sure that you’re using resources effectively in a later episode of this lesson.
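
As a rough sketch of how such requests look in a job script: h_rt and mem are the resource names used elsewhere in this lesson, while the smp parallel environment used to request multiple cores is an assumption that may be named differently on your system.

[yourUsername@login12 ~]$  cat example-job.sh
#!/bin/bash -l
#$  -N resource_demo
# Request 10 minutes of walltime, 1 GB of memory, and 2 cores (the "smp"
# parallel environment name is an assumption - check your site's documentation)
#$  -l h_rt=00:10:00
#$  -l mem=1G
#$  -pe smp 2

echo -n "This script is running on "
hostname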

Submitting resource requests

Submit a job that will use 1 full node and 1 minute of walltime.

Solution

[yourUsername@login12 ~]$  cat example-job.sh
#!/bin/bash -l
#$  -l h_rt=00:01:10

echo -n "This script is running on "
sleep 60 # time in seconds
hostname
echo "This script has finished successfully."
[yourUsername@login12 ~]$  qsub  example-job.sh

Why are the SGE runtime and sleep time not identical?

Job environment variables

When SGE runs a job, it sets a number of environment variables for the job. One of these will let us check what directory our job script was submitted from. The SGE_O_WORKDIR variable is set to the directory from which our job was submitted. Using the SGE_O_WORKDIR variable, modify your job so that it prints (to stdout) the location from which the job was submitted.

Solution
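
A minimal sketch of one way to do this, extending the example script used above:

[yourUsername@login12 ~]$  cat example-job.sh
#!/bin/bash -l
#$  -N workdir_test

echo -n "This script is running on "
hostname
# SGE_O_WORKDIR is set by SGE to the directory the job was submitted from
echo "This job was submitted from $SGE_O_WORKDIR"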

Resource requests are typically binding. If you exceed them, your job will be killed. Let’s use walltime as an example. We will request 30 seconds of walltime, and attempt to run a job for two minutes.

[yourUsername@login12 ~]$  cat example-job.sh
#!/bin/bash -l
#$  -N long_job
#$  -l h_rt=00:00:30

echo -n "This script is running on ..."
sleep 120 # time in seconds
hostname
echo "This script has finished successfully."

Submit the job and wait for it to finish. Once it has finished, check the log file.

[yourUsername@login12 ~]$  qsub  example-job.sh
[yourUsername@login12 ~]$  watch -n 15 qstat -u yourUsername
[yourUsername@login12 ~]$  cat long_job.o*
This script is running on:
node-d00a-007.myriad.ucl.ac.uk

Our job was killed for exceeding the amount of resources it requested. Although this appears harsh, this is actually a feature. Strict adherence to resource requests allows the scheduler to find the best possible place for your jobs. Even more importantly, it ensures that another user cannot use more resources than they’ve been given. If another user messes up and accidentally attempts to use all of the cores or memory on a node, SGE will either restrain their job to the requested resources or kill the job outright. Other jobs on the node will be unaffected. This means that one user cannot mess up the experience of others, the only jobs affected by a mistake in scheduling will be their own.

Cancelling a job

Sometimes we’ll make a mistake and need to cancel a job. This can be done with the qdel command. Let’s submit a job and then cancel it using its job number (remember to change the walltime so that it runs long enough for you to cancel it before it is killed!).

[yourUsername@login12 ~]$  qsub  example-job.sh
Your job 38759 ("example-job.sh") has been submitted
[yourUsername@login12 ~]$  qstat -u yourUsername
job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
-----------------------------------------------------------------------------------------------------------------
38759   0.00000 example-jo yourUser     qw    06/25/2020 14:27:46                                    1

Now cancel the job with its job number (printed in your terminal). A clean return of your command prompt indicates that the request to cancel the job was successful.

[yourUsername@login12 ~]$  qdel 38759
# ... Note that it might take a minute for the job to disappear from the queue ...
[yourUsername@login12 ~]$  qstat -u yourUsername
# ...(no output from qstat when there are no jobs to display)...

Cancelling multiple jobs

We can also cancel all of our jobs at once using the -u option. This will delete all jobs for a specific user (in this case us). Note that you can only delete your own jobs.

Try submitting multiple jobs and then cancelling them all with qdel -u yourUsername.

Solution

First, submit a trio of jobs:

[yourUsername@login12 ~]$  qsub  example-job.sh
[yourUsername@login12 ~]$  qsub  example-job.sh
[yourUsername@login12 ~]$  qsub  example-job.sh

Then, cancel them all:

[yourUsername@login12 ~]$  qdel -u yourUsername

Other types of jobs

Up to this point, we’ve focused on running jobs in batch mode. SGE also provides the ability to start an interactive session.

There are very frequently tasks that need to be done interactively. Creating an entire job script might be overkill, but the amount of resources required is too much for a login node to handle. A good example of this might be building a genome index for alignment with a tool like HISAT2. Fortunately, we can run these types of tasks as a one-off with qrsh.

For an interactive session, you reserve some compute nodes via the scheduler and then are logged in live, just like on the login nodes. These can be used for live visualisation, software debugging, or to work up a script to run your program without having to submit each attempt separately to the queue and wait for it to complete.

[yourUsername@login12 ~]$  qrsh -l mem=512M,h_rt=2:00:00

All the options that qsub accepts are supported, with the difference that with qrsh they must be given on the command line rather than in a job script. Once a node is allocated to you, you should be presented with a bash prompt. Note that the prompt will likely change to reflect your new location, in this case the worker node we are logged on to. You can also verify this with hostname.

When you are done with the interactive job, type exit to quit your session.

Key Points

  • The scheduler handles how compute resources are shared between users.

  • Everything you do should be run through the scheduler.

  • A job is just a shell script.

  • If in doubt, request more resources than you will need.


Accessing software

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • How do we load and unload software packages?

Objectives
  • Understand how to load and use a software package.

On a high-performance computing system, it is often the case that no software is loaded by default. If we want to use a software package, we will need to “load” it ourselves.

Before we start using individual software packages, however, we should understand the reasoning behind this approach. The three biggest factors are software incompatibility, versioning, and dependencies.

Software incompatibility is a major headache for programmers. Sometimes the presence (or absence) of a software package will break others that depend on it. Two of the most famous examples are Python 2 and 3 and C compiler versions. Python 3 famously provides a python command that conflicts with that provided by Python 2. Software compiled against a newer version of the C libraries and then used when they are not present will result in a nasty 'GLIBCXX_3.4.20' not found error, for instance.

Software versioning is another common issue. A team might depend on a certain package version for their research project - if the software version were to change (for instance, because a package was updated), it might affect their results. Having access to multiple software versions allows a set of researchers to prevent software versioning issues from affecting their results.

Dependencies are where a particular software package (or even a particular version) depends on having access to another software package (or even a particular version of another software package). For example, the VASP materials science software may depend on having a particular version of the FFTW (Fastest Fourier Transform in the West) software library available for it to work.

Environment modules

Environment modules are the solution to these problems. A module is a self-contained description of a software package - it contains the settings required to run a software package and, usually, encodes required dependencies on other software packages.

There are a number of different environment module implementations commonly used on HPC systems: the two most common are TCL modules and Lmod. Both of these use similar syntax and the concepts are the same so learning to use one will allow you to use whichever is installed on the system you are using. In both implementations the module command is used to interact with environment modules. An additional subcommand is usually added to the command to specify what you want to do. For a list of subcommands you can use module -h or module help. As for all commands, you can access the full help on the man pages with man module.

On login you may start out with a default set of modules loaded or you may start out with an empty environment; this depends on the setup of the system you are using.

Listing currently loaded modules

You can use the module list command to see which modules you currently have loaded in your environment. If you have no modules loaded, you will see a message telling you so

[yourUsername@login12 ~]$  module list
No Modulefiles Currently Loaded.

Listing available modules

To see available software modules, use module avail

[yourUsername@login12 ~]$  module avail
---------------------- /shared/ucl/apps/modulefiles/core -----------------------
gerun             ops-tools/1.1.0   screen/4.2.1      userscripts/1.3.0
lm-utils/1.0      ops-tools/2.0.0   userscripts/1.0.0 userscripts/1.4.0
mrxvt/0.5.4       rcps-core/1.0.0   userscripts/1.1.0
ops-tools/1.0.0   rlwrap/0.43       userscripts/1.2.0

------------------ /shared/ucl/apps/modulefiles/applications -------------------
abaqus/2017
adf/2014.10
afni/20151030
afni/20181011
amber/14/mpi/intel-2015-update2
amber/14/openmp/intel-2015-update2
amber/14/serial/intel-2015-update2
amber/16/mpi/gnu-4.9.2

[output truncated]

Loading and unloading software

To load a software module, use module load. In this example we will use Python 3.

Initially, Python 3 is not loaded. We can test this by using the which command. which looks for programs the same way that Bash does, so we can use it to tell us where a particular piece of software is stored.

[yourUsername@login12 ~]$  which python3
/usr/bin/which: no python3 in (/shared/ucl/apps/python/3.6.3/gnu-4.9.2/bin:/shared/ucl/apps/intel-mpi/ucl-wrapper/bin:/shared/ucl/apps/intel/2018.Update3/impi/2018.3.222/intel64/bin:/shared/ucl/apps/intel/2018.Update3/debugger_2018/gdb/intel64_mic/bin:/shared/ucl/apps/intel/2018.Update3/compilers_and_libraries_2018.3.222/linux/mpi/intel64/bin:/shared/ucl/apps/intel/2018.Update3/compilers_and_libraries_2018.3.222/linux/bin/intel64:/shared/ucl/apps/cluster-bin:/shared/ucl/apps/cluster-scripts:/shared/ucl/apps/mrxvt/0.5.4/bin:/shared/ucl/apps/tmux/2.2/gnu-4.9.2/bin:/shared/ucl/apps/emacs/24.5/gnu-4.9.2/bin:/shared/ucl/apps/giflib/5.1.1/gnu-4.9.2/bin:/shared/ucl/apps/dos2unix/7.3/gnu-4.9.2/bin:/shared/ucl/apps/nano/2.4.2/gnu-4.9.2/bin:/shared/ucl/apps/apr-util/1.5.4/bin:/shared/ucl/apps/apr/1.5.2/bin:/shared/ucl/apps/git/2.19.1/gnu-4.9.2/bin:/shared/ucl/apps/flex/2.5.39/gnu-4.9.2/bin:/shared/ucl/apps/cmake/3.13.3/gnu-4.9.2/bin:/shared/ucl/apps/gcc/4.9.2/bin:/opt/sge/bin:/opt/sge/bin/lx-amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ibutils/bin)

We can load the python3 command with module load:

[yourUsername@login12 ~]$  module load python
[yourUsername@login12 ~]$  which python3
/shared/ucl/apps/python/3.6.3/gnu-4.9.2/bin/python3

So, what just happened?

To understand the output, first we need to understand the nature of the $PATH environment variable. $PATH is a special environment variable that controls where a UNIX system looks for software. Specifically $PATH is a list of directories (separated by :) that the OS searches through for a command before giving up and telling us it can’t find it. As with all environment variables we can print it out using echo.

[yourUsername@login12 ~]$  echo $PATH
/shared/ucl/apps/python/3.6.3/gnu-4.9.2/bin:/shared/ucl/apps/intel-mpi/ucl-wrapper/bin:/shared/ucl/apps/intel/2018.Update3/impi/2018.3.222/intel64/bin:/shared/ucl/apps/intel/2018.Update3/debugger_2018/gdb/intel64_mic/bin:/shared/ucl/apps/intel/2018.Update3/compilers_and_libraries_2018.3.222/linux/mpi/intel64/bin:/shared/ucl/apps/intel/2018.Update3/compilers_and_libraries_2018.3.222/linux/bin/intel64:/shared/ucl/apps/cluster-bin:/shared/ucl/apps/cluster-scripts:/shared/ucl/apps/mrxvt/0.5.4/bin:/shared/ucl/apps/tmux/2.2/gnu-4.9.2/bin:/shared/ucl/apps/emacs/24.5/gnu-4.9.2/bin:/shared/ucl/apps/giflib/5.1.1/gnu-4.9.2/bin:/shared/ucl/apps/dos2unix/7.3/gnu-4.9.2/bin:/shared/ucl/apps/nano/2.4.2/gnu-4.9.2/bin:/shared/ucl/apps/apr-util/1.5.4/bin:/shared/ucl/apps/apr/1.5.2/bin:/shared/ucl/apps/git/2.19.1/gnu-4.9.2/bin:/shared/ucl/apps/flex/2.5.39/gnu-4.9.2/bin:/shared/ucl/apps/cmake/3.13.3/gnu-4.9.2/bin:/shared/ucl/apps/gcc/4.9.2/bin:/opt/sge/bin:/opt/sge/bin/lx-amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ibutils/bin

You’ll notice a similarity to the output of the which command. In this case, there’s only one difference: the different directory at the beginning. When we ran the module load command, it added a directory to the beginning of our $PATH. Let’s examine what’s there:

[yourUsername@login12 ~]$  ls /shared/ucl/apps/python/3.6.3/gnu-4.9.2/bin
[output truncated]

conda-convert                       gio-querymodules        jupyter-run               python                tiff2rgba
conda-develop                       glacier                 jupyter-serverextension   python3               tiffcmp
conda-env                           glib-compile-resources  jupyter-troubleshoot      python3.6             tiffcp
conda-index                         glib-compile-schemas    jupyter-trust             python3.6-config      tiffcrop
conda-inspect                       glib-genmarshal         kill_instance             python3.6m            tiffdither
conda-metapackage                   glib-gettextize         launch_instance           python3.6m-config     tiffdump
conda-render                        glib-mkenums            lconvert                  python3-config        tiffinfo
conda-server                        gobject-query           libpng16-config           pyuic5                tiffmedian
conda-skeleton                      gresource               libpng-config             pyvenv                tiffset
elbadmin                            hb-view                 patchelf                  rst2man.py

[output truncated]

Taking this to its conclusion, module load will add software to your $PATH. It “loads” software. A special note on this: depending on which version of the module program is installed at your site, module load may also load required software dependencies.

To demonstrate, let’s load the ansys module and then use the module list command to show which modules we currently have loaded in our environment. (ANSYS is an engineering simulation product.)

[yourUsername@login12 ~]$  module load ansys
ansys/2019.r3(73):ERROR:151: Module 'ansys/2019.r3' depends on one of the module(s) 'giflib/5.1.1'
ansys/2019.r3(73):ERROR:102: Tcl command execution failed: prereq   giflib/5.1.1

This shows that the default ansys module will not run because it first needs giflib/5.1.1 to be loaded. Some HPC systems will automatically load dependencies like this, but at the time of writing (June 2020) UCL’s Myriad does not.

Let’s load the giflib module:

[yourUsername@login12 ~]$  module load giflib
giflib/5.1.1(18):ERROR:151: Module 'giflib/5.1.1' depends on one of the module(s) 'gcc-libs/4.9.2'
giflib/5.1.1(18):ERROR:102: Tcl command execution failed: prereq gcc-libs

Here, we see that the giflib module itself also has a dependency, gcc-libs. So we have to load that first, then load giflib, and then finally load ansys.

[yourUsername@login12 ~]$  module load gcc-libs/4.9.2
[yourUsername@login12 ~]$  module load giflib/5.1.1
[yourUsername@login12 ~]$  module load ansys
~/Scratch/.config is configured
...
...
~/.mw doesn't exist - creating

If you now use the module list command, you should see these three modules included in the list.

To unload a specific module, e.g. ansys, run the command module unload ansys. (On some systems, this will also unload the modules it depends on. Currently this is not the case with Myriad.)

If we wanted to unload everything at once, we could run module purge (which unloads all modules).

[yourUsername@login12 ~]$  module purge
[yourUsername@login12 ~]$  module list
No Modulefiles Currently Loaded.

Software versioning

So far, we’ve learned how to load and unload software packages. This is very useful. However, we have not yet addressed the issue of software versioning. At some point or other, you will run into issues where only one particular version of some software will be suitable. Perhaps a key bugfix only happened in a certain version, or version X broke compatibility with a file format you use. In either of these example cases, it helps to be very specific about what software is loaded.

Let’s examine the output of module avail more closely.

[yourUsername@login12 ~]$  module avail
---------------------- /shared/ucl/apps/modulefiles/core -----------------------
gerun             ops-tools/1.1.0   screen/4.2.1      userscripts/1.3.0
lm-utils/1.0      ops-tools/2.0.0   userscripts/1.0.0 userscripts/1.4.0
mrxvt/0.5.4       rcps-core/1.0.0   userscripts/1.1.0
ops-tools/1.0.0   rlwrap/0.43       userscripts/1.2.0

------------------ /shared/ucl/apps/modulefiles/applications -------------------
abaqus/2017
adf/2014.10
afni/20151030
afni/20181011
amber/14/mpi/intel-2015-update2
amber/14/openmp/intel-2015-update2
amber/14/serial/intel-2015-update2
amber/16/mpi/gnu-4.9.2

[output truncated]

To be more specific, you can search for the particular software you want, e.g.

[yourUsername@login12 ~]$  module avail stata
------------------------------------- /shared/ucl/apps/modulefiles/applications --------------------------------------
stata/14 stata/15

Let’s take a closer look at the matlab module. MATLAB is a widely used numerical computing environment. As we shall see, there are different versions available, and we want to make sure the one we use is the correct one for our purposes.

Let’s see which versions we have access to.

[yourUsername@login12 ~]$  module avail matlab
------------------------------------- /shared/ucl/apps/modulefiles/applications --------------------------------------
matlab/full/r2015a/8.5 matlab/full/r2016b/9.1 matlab/full/r2018a/9.4 matlab/full/r2019b/9.7
matlab/full/r2015b/8.6 matlab/full/r2017a/9.2 matlab/full/r2018b/9.5

In this case, there are seven different versions. How do we load each copy, and which copy is the default?

On some systems, a module might have (default) next to it. This indicates that it is the default (i.e. the version that would be loaded if we type module load matlab). In this case, we don’t see this, so we will have to load matlab and see what we get.

[yourUsername@login12 ~]$  module load matlab
matlab/full/r2019b/9.7(99):ERROR:151: Module 'matlab/full/r2019b/9.7' depends on one of the module(s) 'gcc-libs/4.9.2'
matlab/full/r2019b/9.7(99):ERROR:102: Tcl command execution failed: prereq gcc-libs

Here, we see that the default version of Matlab on the system is r2019b/9.7, which in this case is the most recent version. However, you should not assume that the default version is necessarily the latest.

As we saw in the earlier example, there are one or more dependencies.

Suppose we decide to load an earlier version of Matlab, e.g. r2017a/9.2.

[yourUsername@login12 ~]$  module purge
[yourUsername@login12 ~]$  module list
No Modulefiles Currently Loaded.
[yourUsername@login12 ~]$  module load matlab/full/r2017a/9.2
matlab/full/r2017a/9.2(96):ERROR:151: Module 'matlab/full/r2017a/9.2' depends on one of the module(s) 'gcc-libs/4.9.2'
matlab/full/r2017a/9.2(96):ERROR:102: Tcl command execution failed: prereq gcc-libs
[yourUsername@login12 ~]$  module load gcc-libs/4.9.2
[yourUsername@login12 ~]$  module load matlab/full/r2017a/9.2
matlab/full/r2017a/9.2(97):ERROR:151: Module 'matlab/full/r2017a/9.2' depends on one of the module(s) 'xorg-utils/X11R7.7'
matlab/full/r2017a/9.2(97):ERROR:102: Tcl command execution failed: prereq xorg-utils/X11R7.7
[yourUsername@login12 ~]$  module load xorg-utils/X11R7.7
[yourUsername@login12 ~]$  module load matlab/full/r2017a/9.2
~/.matlab is a symbolic link pointing to /home/yourUsername/Scratch/.matlab

Matlab setup complete type matlab to start Matlab.
[yourUsername@login12 ~]$  matlab -nodisplay -nosplash -nodesktop
                                               < M A T L A B (R) >
                                     Copyright 1984-2017 The MathWorks, Inc.
                                      R2017a (9.2.0.556344) 64-bit (glnxa64)
                                                  March 27, 2017

 
To get started, type one of these: helpwin, helpdesk, or demo.
For product information, visit www.mathworks.com.
 
>> quit

Note that you cannot load two different versions of the same software at once. Currently, we have loaded matlab/full/r2017a/9.2. Let’s try also loading matlab/full/r2015b/8.6:

[yourUsername@login12 ~]$  module load matlab/full/r2015b/8.6
matlab/full/r2015b/8.6(108):ERROR:150: Module 'matlab/full/r2015b/8.6' conflicts with the currently loaded module(s) 'matlab/full/r2017a/9.2'
matlab/full/r2015b/8.6(108):ERROR:102: Tcl command execution failed: conflict matlab

As we can see, we get an error message about conflicts. If we do indeed wish to load version r2015b/8.6, we can say

[yourUsername@login12 ~]$  module unload matlab/full/r2017a/9.2
[yourUsername@login12 ~]$  module load matlab/full/r2015b/8.6

or, in one step:

[yourUsername@login12 ~]$  module swap matlab matlab/full/r2015b/8.6

Check that this module has been loaded:

[yourUsername@login12 ~]$  module list
Currently Loaded Modulefiles:
  1) gcc-libs/4.9.2                 9) gerun                         17) userscripts/1.4.0
  ...                              ...                               ...
  6) apr-util/1.5.4                14) emacs/24.5                    22) xorg-utils/X11R7.7
  7) subversion/1.8.13             15) tmux/2.2                      23) matlab/full/r2015b/8.6
  8) screen/4.2.1                  16) mrxvt/0.5.4

Using software modules in scripts

Create a job that is able to run python3 --version. Remember, no software is loaded by default! Running a job is just like logging on to the system (you should not assume a module loaded on the login node is loaded on a compute node).

Solution
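
A minimal sketch of such a job script; the file name and the small walltime request are arbitrary, and python is the module name used earlier in this episode:

[yourUsername@login12 ~]$  cat python-version-job.sh
#!/bin/bash -l
#$  -N python_version
#$  -l h_rt=00:05:00

# Load the module inside the job - modules loaded on the login node are not
# automatically available on the compute node
module load python
python3 --version
[yourUsername@login12 ~]$  qsub  python-version-job.sh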

Installing software of our own

Most HPC clusters have a pretty large set of preinstalled software. Nonetheless, it’s unlikely that all of the software we’ll need will be available. Sooner or later, we’ll need to install some software of our own.

Though software installation differs from package to package, the general process is the same: download the software, read the installation instructions (important!), install dependencies, compile, then start using our software.

As an example we will install the bioinformatics toolkit seqtk. We’ll first need to obtain the source code from GitHub using git.

[yourUsername@login12 ~]$  git clone https://github.com/lh3/seqtk.git
Cloning into 'seqtk'...
remote: Enumerating objects: 14, done.
remote: Counting objects: 100% (14/14), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 353 (delta 7), reused 11 (delta 4), pack-reused 339
Receiving objects: 100% (353/353), 169.79 KiB | 5.48 MiB/s, done.
Resolving deltas: 100% (202/202), done.

Now, using the instructions in the README.md file, all we need to do to complete the install is to cd into the seqtk folder and run the command make.

[yourUsername@login12 ~]$  cd seqtk
[yourUsername@login12 ~]$  less README.md
[yourUsername@login12 ~]$  make
gcc -g -Wall -O2 -Wno-unused-function seqtk.c -o seqtk -lz -lm
seqtk.c: In function ‘stk_comp’:
seqtk.c:400:16: warning: variable ‘lc’ set but not used [-Wunused-but-set-variable]
    int la, lb, lc, na, nb, nc, cnt[11];
                ^

It’s done! Now all we need to do to use the program is invoke it like any other program.

[yourUsername@login12 ~]$  ./seqtk
Usage:   seqtk <command> <arguments>
Version: 1.2-r101-dirty

Command: seq       common transformation of FASTA/Q
         comp      get the nucleotide composition of FASTA/Q
         sample    subsample sequences
         subseq    extract subsequences from FASTA/Q
         fqchk     fastq QC (base/quality summary)
         mergepe   interleave two PE FASTA/Q files
         trimfq    trim FASTQ using the Phred algorithm

         hety      regional heterozygosity
         gc        identify high- or low-GC regions
         mutfa     point mutate FASTA at specified positions
         mergefa   merge two FASTA/Q files
         famask    apply a X-coded FASTA to a source FASTA
         dropse    drop unpaired from interleaved PE FASTA/Q
         rename    rename sequence names
         randbase  choose a random base from hets
         cutN      cut sequence at long N
         listhet   extract the position of each het
         hpc       homopolyer-compressed sequence

We’ve successfully built our first piece of software on the cluster!

Key Points

  • Load software with module load softwareName

  • Unload software with module purge

  • The module system handles software versioning and package conflicts for you automatically.

  • You can edit your .bashrc file to automatically load a software package.


Transferring files

Overview

Teaching: 30 min
Exercises: 10 min
Questions
  • How do I upload/download files to the cluster?

Objectives
  • Be able to transfer files to and from a computing cluster.

Computing with a remote computer offers limited use if we cannot get files to or from the cluster. There are several options for transferring data between computing resources, from command-line tools to GUI programs, which we will cover here.

Download files from the internet using wget

One of the most straightforward ways to download files is to use wget. Any file that can be downloaded in your web browser with an accessible link can be downloaded using wget. This is a quick way to download datasets or source code.

The syntax is: wget https://some/link/to/a/file.tar.gz. For example, download the lesson sample files using the following command:

[yourUsername@login12 ~]$  wget http://rits.github-pages.ucl.ac.uk/hpc-intro/files/bash-lesson.tar.gz

Transferring single files and folders with scp

To copy a single file to or from the cluster, we can use scp (“secure copy”). The syntax can be a little complex for new users, but we’ll break it down.

To transfer to another computer:

[user@laptop ~]$ scp path/to/local/file.txt yourUsername@myriad.rc.ucl.ac.uk:path/on/Myriad

Transfer a file

Create a “calling card” with your name and email address, then transfer it to your home directory on Myriad.

Solution

Create a file like this, with your name (or an alias) and top-level domain:

[user@laptop ~]$ cat calling-card.txt
Your Name
Your.Address@institution.tld

Now, transfer it to Myriad:

[user@laptop ~]$ scp calling-card.txt yourUsername@myriad.rc.ucl.ac.uk:~/
calling-card.txt                                                 100%   37     7.6 KB/s   00:00

To download from another computer:

[user@laptop ~]$ scp yourUsername@myriad.rc.ucl.ac.uk:path/on/Myriad/file.txt path/to/local/

Note that we can simplify this by shortening our paths. On the remote computer, everything after the : is relative to our home directory. We can simply add a : and leave it at that if we don’t care where the file goes.

[user@laptop ~]$ scp local-file.txt yourUsername@myriad.rc.ucl.ac.uk:

To recursively copy a directory, we just add the -r (recursive) flag:

[user@laptop ~]$ scp -r some-local-folder/ yourUsername@myriad.rc.ucl.ac.uk:target-directory/

A note on rsync

As you gain experience with transferring files, you may find the scp command limiting. The rsync utility provides advanced features for file transfer and is typically faster compared to both scp and sftp (see below). It is especially useful for transferring large and/or many files and creating synced backup folders.

The syntax is similar to scp. To transfer to another computer with commonly used options:

[user@laptop ~]$ rsync -avzP path/to/local/file.txt yourUsername@myriad.rc.ucl.ac.uk:path/on/Myriad

The a (archive) option preserves file timestamps and permissions among other things; the v (verbose) option gives verbose output to help monitor the transfer; the z (compression) option compresses the file during transit to reduce size and transfer time; and the P (partial/progress) option preserves partially transferred files in case of an interruption and also displays the progress of the transfer.

To recursively copy a directory, we can use the same options:

[user@laptop ~]$ rsync -avzP path/to/local/dir yourUsername@myriad.rc.ucl.ac.uk:path/on/Myriad

The a (archive) option implies recursion.

To download a file, we simply change the source and destination:

[user@laptop ~]$ rsync -avzP yourUsername@myriad.rc.ucl.ac.uk:path/on/Myriad/file.txt path/to/local/

Transferring files interactively with FileZilla

FileZilla is a cross-platform client for downloading and uploading files to and from a remote computer. It is easy to use and works well for most transfers. It uses the sftp protocol. You can read more about using the sftp protocol on the command line.

Download and install the FileZilla client from https://filezilla-project.org. After installing and opening the program, you should end up with a window with a file browser of your local system on the left hand side of the screen. When you connect to the cluster, your cluster files will appear on the right hand side.

To connect to the cluster, we’ll just need to enter our credentials at the top of the screen:

Hit “Quickconnect” to connect. You should see your remote files appear on the right hand side of the screen. You can drag-and-drop files between the left (local) and right (remote) sides of the screen to transfer files.

Archiving files

One of the biggest challenges we often face when transferring data between remote HPC systems is that of large numbers of files. There is an overhead to transferring each individual file and when we are transferring large numbers of files these overheads combine to slow down our transfers to a large degree.

The solution to this problem is to archive multiple files into smaller numbers of larger files before we transfer the data to improve our transfer efficiency. Sometimes we will combine archiving with compression to reduce the amount of data we have to transfer and so speed up the transfer.

The most common archiving command you will use on a (Linux) HPC cluster is tar. tar can be used to combine files into a single archive file and, optionally, compress. For example, to collect all files contained inside output_data into an archive file called output_data.tar we would use:

[user@laptop ~]$ tar -cvf output_data.tar output_data/

The options we used for tar are:

  • -c - create a new archive
  • -v - verbose, list the files being processed
  • -f - use the following argument as the name of the archive file

The tar command allows users to concatenate flags. Instead of typing tar -c -v -f, we can use tar -cvf. We can also use the tar command to extract the files from the archive once we have transferred it:

[user@laptop ~]$ tar -xvf output_data.tar

This will put the data into a directory called output_data. Be careful, it will overwrite data there if this directory already exists!

Sometimes you may also want to compress the archive to save space and speed up the transfer. However, you should be aware that for large amounts of data, compressing and un-compressing can take longer than transferring the un-compressed data, so compression may not always be worth it. To create a compressed archive using tar we add the -z option and add the .gz extension to the file to indicate it is gzip-compressed, e.g.:

[user@laptop ~]$ tar -czvf output_data.tar.gz output_data/

A compressed archive is extracted in exactly the same way as an uncompressed one, as tar recognizes that it is compressed and un-compresses and extracts it at the same time:

[user@laptop ~]$ tar -xvf output_data.tar.gz

Transferring files

Using one of the above methods, try transferring files to and from the cluster. Which method do you like the best?

Working with Windows

When you transfer files from a Windows system to a Unix system (Mac, Linux, BSD, Solaris, etc.) this can cause problems. Windows encodes its files slightly differently than Unix, and adds an extra character to every line.

On a Unix system, every line in a file ends with a \n (newline). On Windows, every line in a file ends with a \r\n (carriage return + newline). This causes problems sometimes.

Though most modern programming languages and software handles this correctly, in some rare instances, you may run into an issue. The solution is to convert a file from Windows to Unix encoding with the dos2unix command.

You can identify if a file has Windows line endings with cat -A filename. A file with Windows line endings will have ^M$ at the end of every line. A file with Unix line endings will have $ at the end of a line.

To convert the file, just run dos2unix filename. (Conversely, to convert back to Windows format, you can run unix2dos filename.)
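
For example, with a hypothetical file named windows-file.txt created on Windows, the before-and-after looks like this:

[user@laptop ~]$ cat -A windows-file.txt
first line^M$
second line^M$
[user@laptop ~]$ dos2unix windows-file.txt
# ...(dos2unix prints a short message confirming the conversion)...
[user@laptop ~]$ cat -A windows-file.txt
first line$
second line$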

A note on ports

All file transfers using the above methods use encrypted communication over port 22. This is the same connection method used by SSH. In fact, all file transfers using these methods occur through an SSH connection. If you can connect via SSH over the normal port, you will be able to transfer files.
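
If your site exposes SSH on a non-standard port (2222 below is purely a made-up example), the port can be given explicitly; note that scp uses an upper-case -P while ssh uses a lower-case -p:

[user@laptop ~]$ ssh -p 2222 yourUsername@myriad.rc.ucl.ac.uk
[user@laptop ~]$ scp -P 2222 local-file.txt yourUsername@myriad.rc.ucl.ac.uk: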

Key Points

  • wget downloads a file from the internet.

  • scp transfers files to and from your computer.

  • You can use an SFTP client like FileZilla to transfer files through a GUI.


Using resources effectively

Overview

Teaching: 15 min
Exercises: 10 min
Questions
  • How do we monitor our jobs?

  • How can I get my jobs scheduled more easily?

Objectives
  • Understand how to look up job statistics and profile code.

  • Understand job size implications.

We now know virtually everything we need to know about getting work done on a cluster. We can log on, submit different types of jobs, use pre-installed software, and install and use software of our own. What we need to do now is use these systems effectively.

Estimating required resources using the scheduler

Although we covered requesting resources from the scheduler earlier, how do we know how much and what type of resources we will need in the first place?

Answer: we don’t. Not until we’ve tried it ourselves at least once. We’ll need to benchmark our job and experiment with it before we know how much it needs in the way of resources.

The most effective way of figuring out the resources a job needs is to submit a test job, and then ask the scheduler what resources it used.

A good rule of thumb is to ask the scheduler for more time and memory than you expect your job to need. This ensures that minor fluctuations in run time or memory use will not result in your job being cancelled by the scheduler. Recommendations for how much extra to ask for vary but 10% is probably the minimum, with 20-30% being more typical. Keep in mind that if you ask for too much, your job may not run even though enough resources are available, because the scheduler will be waiting to match what you asked for.
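
As a worked example with made-up numbers: if a test run finished in about 45 minutes and peaked at roughly 3.5 GB of memory, adding 20-30% headroom suggests requesting something like one hour and 5 GB:

# ~45 minutes observed, rounded up with headroom
#$  -l h_rt=01:00:00
# ~3.5 GB observed, rounded up with headroom
#$  -l mem=5G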

Benchmarking fastqc

Create a job that runs the following command in the same directory as the .fastq files

[yourUsername@login12 ~]$  fastqc name_of_fastq_file

The fastqc command is provided by the fastqc module. You’ll need to figure out a good amount of resources to allocate for this first “test run”. You might also want to have the scheduler email you to tell you when the job is done.

Hint: The job only needs 1 CPU and not too much memory or time. The trick is figuring out just how much you’ll need!

Solution

First, write the SGE script to run fastqc on the file supplied at the command-line.

[yourUsername@login12 ~]$  cat fastqc-job.sh
#!/bin/bash -l
#$  -l h_rt=00:10:00

fastqc $1

Now, create and run a script to launch a job for each .fastq file.

[yourUsername@login12 ~]$  cat fastqc-launcher.sh
#!/bin/bash
# Submit one fastqc job per .fastq file in the current directory
for f in *.fastq
do
    qsub fastqc-job.sh "$f"
done
[yourUsername@login12 ~]$  chmod +x fastqc-launcher.sh
[yourUsername@login12 ~]$  ./fastqc-launcher.sh

Once the job completes (note that it takes much less time than expected), we can query the scheduler to see how long our job took and what resources were used. We will use jobhist to get statistics about our job.

[yourUsername@login12 ~]$  jobhist
        FSTIME        |       FETIME        |   HOSTNAME    |  OWNER  | JOB NUMBER | TASK NUMBER | EXIT STATUS |  JOB NAME   
----------------------+---------------------+---------------+---------+------------+-------------+-------------+-------------
  2020-07-02 15:37:56 | 2020-07-02 15:37:58 | node-f00a-001 | YourUser|       1965 |           0 |           0 | Serial_Job

This shows all the jobs we ran recently (note that there may be multiple entries per job). To get information about a specific job, we change the command slightly.

[yourUsername@login12 ~]$  jobhist -j 1965

This will show a lot of information - in fact, every piece of information the scheduler collected about your job. It may be useful to pipe this output to less to make it easier to view (use the left and right arrow keys to scroll through fields).

[yourUsername@login12 ~]$  jobhist -j 1965 | less

Pay particular attention to fields such as the exit status and the node the job ran on, as well as any accounting the scheduler records about run time and memory use.

Measuring the statistics of currently running tasks

Connecting to Nodes

Typically, clusters allow users to connect directly to compute nodes from the login node. This is useful for checking on a running job and seeing how it is doing, but it is not recommended practice in general: only connect to nodes where you have a job running, and avoid launching work there directly, since anything you start by hand bypasses the scheduler.

If you need to do this, check where a job is running with qstat, then run ssh nodename.
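For example, the queue column of qstat shows which queue instance (and therefore which node) a running job was assigned to. The job ID, queue name and output layout below are illustrative and will differ on your system:

[yourUsername@login12 ~]$  qstat -u yourUsername
job-ID  prior    name        user          state  submit/start at      queue                    slots
------------------------------------------------------------------------------------------------------
  1965  3.50000  Serial_Job  yourUsername  r      07/02/2020 15:37:56  Serial@node-d00a-001         1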

Give it a try!

Solution

[yourUsername@login12 ~]$  ssh node-d00a-001

We can also check on processes running on the login node right now in the same way (so it is not necessary to ssh to a compute node for this example).

Monitor system processes with top

The most reliable way to check current system stats is with top. Some sample output might look like the following (press q or Ctrl + c to exit):

[yourUsername@login12 ~]$  top
top - 16:28:49 up 47 days,  5:33, 96 users,  load average: 53.87, 55.82, 50.47
Tasks: 1226 total,  31 running, 1181 sleeping,  10 stopped,   4 zombie
%Cpu(s): 66.8 us, 33.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 19754995+total, 13150139+free, 21139988 used, 44908560 buff/cache
KiB Swap: 21242220+total, 20060854+free, 11813660 used. 17565382+avail Mem 

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                             
145836 richard   20   0 5230196   3.8g   1204 R  2446  2.0 683:15.69 bowtie2-align-s                                     
 71877 agape     20   0   46372   4100    932 R  81.9  0.0   9:37.41 rsync                                               
205211 logos     20   0 1072236 524576   6552 R  79.9  0.3   0:07.12 python                                              
205224 peter     20   0 1067448 520076   6612 R  77.3  0.3   0:07.06 python                                              
205212 paul      20   0  993228 445776   6556 R  55.3  0.2   0:06.04 python 
 74051 paul      20   0   48816   2708    496 S  35.6  0.0   8:42.12 rsync                                               
 58157 hezekia   20   0  129612   2848   1140 S   2.3  0.0 975:04.49 htop                                                
124495 samuel    20   0  136188   3396   1152 S   2.3  0.0   1078:34 htop                                                
 91884 lydia     20   0  933260 241984   9040 S   1.7  0.1   4:32.68 ipython                                             
  2628 root      20   0       0      0      0 S   1.3  0.0  92:11.14 ptlrpcd_00_0                            

Overview of the most important fields:

  • PID: the process ID number.

  • USER: the user who owns the process.

  • %CPU: the percentage of a CPU core the process is using (values above 100% mean it is using more than one core).

  • %MEM: the percentage of the node's memory the process is using.

  • TIME+: the total CPU time the process has consumed so far.

  • COMMAND: the command that was run.

htop is a curses-based alternative to top, producing a better-organized and “prettier” dashboard in your terminal. Unfortunately, it is not always installed. If that is the case, politely ask your system administrators to install it for you.
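If htop is available, you can limit the display to just your own processes; the -u option shown here is a standard htop flag, but check htop --help on your system:

[yourUsername@login12 ~]$  htop -u yourUsername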

Check memory load with free

Another useful tool is the free -h command. This will show the currently used/free amount of memory.

[yourUsername@login12 ~]$  free -h
              total        used        free      shared  buff/cache   available
Mem:           188G        109G         54G        528K         24G         78G
Swap:          202G         11G        191G

The key fields here are total, used, and available - which represent the amount of memory that the machine has in total, how much is currently being used, and how much is still available. When a computer runs out of memory it will attempt to use “swap” space on your hard drive instead. Swap space is very slow to access - a computer may appear to “freeze” if it runs out of memory and begins using swap. However, compute nodes on HPC systems usually have swap space disabled so when they run out of memory you usually get an “Out Of Memory (OOM)” error instead.

ps

To show all processes from your current session, type ps.

[yourUsername@login12 ~]$  ps
  PID TTY          TIME CMD
15113 pts/5    00:00:00 bash
15218 pts/5    00:00:00 ps

Note that this will only show processes from our current session. To show all processes you own (regardless of whether they are part of your current session or not), you can use ps ux.

[yourUsername@login12 ~]$  ps ux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
auser  67780  0.0  0.0 149140  1724 pts/81   R+   13:51   0:00 ps ux
auser  73083  0.0  0.0 142392  2136 ?        S    12:50   0:00 sshd: auser@pts/81
auser  73087  0.0  0.0 114636  3312 pts/81   Ss   12:50   0:00 -bash

This is useful for identifying which processes are doing what.

Killing processes

To kill all of a certain type of process, you can run killall commandName. For example,

[yourUsername@login12 ~]$  killall rsession

would kill all rsession processes created by RStudio. Note that you can only kill your own processes.

You can also kill processes by their PIDs. For example, your ssh connection to the server is listed above with PID 73083. If you wish to close that connection forcibly, you could kill 73083.

Sometimes, killing a process does not work instantly. To kill the process in the most aggressive manner possible, use the -9 flag, i.e., kill -9 73083. It is recommended to try kill without -9 first: this sends the process a “terminate” signal (SIGTERM), giving it the chance to clean up child processes and exit cleanly. However, if a process just isn’t responding, use -9 to terminate it instantly (SIGKILL).
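Putting this together, a typical sequence for stopping a runaway process of your own might look like the following; the PID, process name and output are illustrative:

[yourUsername@login12 ~]$  ps ux | grep rsync
auser  74051 35.6  0.0  48816  2708 pts/81  S  12:50  8:42 rsync -av data/ backup/
[yourUsername@login12 ~]$  kill 74051
[yourUsername@login12 ~]$  kill -9 74051    # only if it is still running after a short wait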

Key Points

  • The smaller your job, the faster it will schedule.


Using shared resources responsibly

Overview

Teaching: 15 min
Exercises: 5 min
Questions
  • How can I be a responsible user?

  • How can I protect my data?

  • How can I best get large amounts of data off an HPC system?

Objectives
  • Learn how to be a considerate shared system citizen.

  • Understand how to protect your critical data.

  • Appreciate the challenges with transferring large amounts of data off HPC systems.

  • Understand how to convert many files to a single archive file using tar.

One of the major differences between using remote HPC resources and your own system (e.g. your laptop) is that they are a shared resource. How many users the resource is shared between at any one time varies from system to system but it is unlikely you will ever be the only user logged into or using such a system.

We have already mentioned one consequence of this shared nature: the scheduling system through which you submit your jobs. There are other things to consider too, in order to be a considerate HPC citizen, to protect your critical data, and to transfer data efficiently.

Be kind to the login nodes

The login node is often very busy managing many logged-in users who are creating and editing files and compiling software. It does not have any spare capacity to run computational work.

Don’t run jobs on the login node (though quick tests are generally fine). A “quick test” is generally anything that uses less than 5 minutes of time. If you use too much resource then other users on the login node will start to be affected - their login sessions will start to run slowly and may even freeze or hang.

Login nodes are a shared resource

Remember, the login node is shared with all other users and your actions could cause issues for other people. Think carefully about the potential implications of issuing commands that may use large amounts of resource.

You can always use the commands top and ps ux to list the processes you are running on a login node and the amount of CPU and memory they are using. The kill command can be used along with the PID to terminate any processes that are using large amounts of resource.

Login Node Etiquette

Which of these commands would probably be okay to run on the login node?

  1. python physics_sim.py
  2. make
  3. create_directories.sh
  4. molecular_dynamics_2
  5. tar -xzf R-3.3.0.tar.gz

Solution

Building software, creating directories and unpacking software are common and acceptable tasks for the login node: options #2 (make), #3 (create_directories.sh) and #5 (tar) are probably OK. Note that script names do not always reflect their contents: before launching #3, inspect it with less create_directories.sh and make sure it is not a Trojan horse.

Running resource-intensive applications is frowned upon. Unless you have cleared it with the system administrators, do not run #1 (python) or #4 (custom MD code).

If you experience performance issues with a login node you should report it to the system staff (usually via the helpdesk) for them to investigate. You can use the top command to see which users are using which resources.

Test before scaling

Remember that you are generally charged for usage on shared systems. A simple mistake in a job script can end up costing a large amount of resource budget. Imagine a job script with a mistake that makes it sit doing nothing for 24 hours on 1000 cores or one where you have requested 2000 cores by mistake and only use 100 of them! This problem can be compounded when people write scripts that automate job submission (for example, when running the same calculation or analysis over lots of different input). When this happens it hurts both you (as you waste lots of charged resource) and other users (who are blocked from accessing the idle compute nodes).

On very busy resources you may wait many days in a queue for your job to fail within 10 seconds of starting due to a trivial typo in the job script. This is extremely frustrating! Most systems provide dedicated resources for testing that have short wait times to help you avoid this issue.

Test job submission scripts that use large amounts of resources

Before submitting a large run of jobs, submit one as a test first to make sure everything works as expected.

Before submitting a very large or very long job submit a short truncated test to ensure that the job starts as expected.
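A minimal sketch of such a truncated test, assuming a hypothetical my_analysis program and a small subset of the real input:

#!/bin/bash -l
#$ -l h_rt=00:05:00    # just long enough to confirm the job starts and runs correctly

./my_analysis small_subset_of_input.dat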

Have a backup plan

Although many HPC systems keep backups, these do not always cover all of the available file systems and may only be for disaster recovery purposes (i.e. for restoring the whole file system if it is lost, rather than an individual file or directory you have deleted by mistake). Your data on the system is primarily your responsibility and you should keep secure copies of any data that is critical to your work.

Version control systems (such as Git) often have free, cloud-based offerings (e.g. GitHub, GitLab) that are generally used for storing source code. Even if you are not writing your own programs, these can be very useful for storing job scripts, analysis scripts and small input files.

For larger amounts of data, you should make sure you have a robust system in place for taking copies of critical data off the HPC system wherever possible to backed-up storage. Tools such as rsync can be very useful for this.
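For example, a sketch of an rsync command that copies a results directory from the cluster to backed-up storage on your own machine (the directory paths are placeholders):

[user@laptop ~]$ rsync -av yourUsername@myriad.rc.ucl.ac.uk:~/results/ ~/backups/results/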

Your access to the shared HPC system will generally be time-limited so you should ensure you have a plan for transferring your data off the system before your access finishes. The time required to transfer large amounts of data should not be underestimated and you should ensure you have planned for this early enough (ideally, before you even start using the system for your research).

In all these cases, the helpdesk of the system you are using should be able to provide useful guidance on your options for data transfer for the volumes of data you will be using.

Your data is your responsibility

Make sure you understand what the backup policy is on the file systems on the system you are using and what implications this has for your work if you lose your data on the system. Plan your backups of critical data and how you will transfer data off the system throughout the project.

Transferring data

As mentioned above, many users run into the challenge of transferring large amounts of data off HPC systems at some point (this happens more often when transferring data off than onto systems, but the advice below applies in either case). Data transfer speed may be limited by many different factors, so the best data transfer mechanism to use depends on the type of data being transferred and where the data is going. Some of the key issues to be aware of are:

  • Disk speed: file systems on HPC systems are often optimised for large, streaming reads and writes and can perform poorly when handling very large numbers of small files.

  • Meta-data performance: operations such as opening, closing and listing files carry a fixed overhead per file, so transferring many small files is far slower than transferring one large file of the same total size.

  • Network speed: the bandwidth of the connection between the HPC system and the destination (including any firewalls in between) ultimately limits how fast data can move.

As mentioned above, if you have related data that consists of a large number of small files it is strongly recommended to pack the files into a larger archive file for long term storage and transfer. A single large file makes more efficient use of the file system and is easier to move, copy and transfer because significantly fewer meta-data operations are required. Archive files can be created using tools like tar and zip. We have already met tar when we talked about data transfer earlier.
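For instance, to bundle a directory of results into a single compressed archive before transfer, and to unpack it again at the destination (the directory name is a placeholder):

[yourUsername@login12 ~]$  tar -czvf results.tar.gz results/
[user@laptop ~]$ tar -xzvf results.tar.gz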

Consider the best way to transfer data

If you are transferring large amounts of data you will need to think about what may affect your transfer performance. It is always useful to run some tests that you can use to extrapolate how long it will take to transfer your data.

Say you have a “data” folder containing 10,000 or so files, a healthy mix of small and large ASCII and binary data. Which of the following would be the best way to transfer them to Myriad?

  1. [user@laptop ~]$ scp -r data yourUsername@myriad.rc.ucl.ac.uk:~/
    
  2. [user@laptop ~]$ rsync -ra data yourUsername@myriad.rc.ucl.ac.uk:~/
    
  3. [user@laptop ~]$ rsync -raz data yourUsername@myriad.rc.ucl.ac.uk:~/
    
  4. [user@laptop ~]$ tar -cvf data.tar data
    [user@laptop ~]$ rsync -raz data.tar yourUsername@myriad.rc.ucl.ac.uk:~/
    
  5. [user@laptop ~]$ tar -cvzf data.tar.gz data
    [user@laptop ~]$ rsync -ra data.tar.gz yourUsername@myriad.rc.ucl.ac.uk:~/
    

Solution

  1. scp will recursively copy the directory. This works, but without compression.
  2. rsync -ra works like scp -r, but preserves file information like modification times and permissions. This is marginally better.
  3. rsync -raz adds compression, which will save some bandwidth. If you have a strong CPU at both ends of the line, and you’re on a slow network, this is a good choice.
  4. This command first uses tar to merge everything into a single file, then rsync -z to transfer it with compression. With this large number of files, latency per-file can hamper your transfer, so this is a good idea.
  5. This command uses tar -z to compress the archive, then rsync to transfer it. This may perform similarly to #4, but in most cases (for large datasets), it’s the best combination of high throughput and low latency (making the most of your time and network connection).

Key Points

  • Be careful how you use the login node.

  • Your data on the system is your responsibility.

  • Plan and test large data transfers.

  • It is often best to convert many files to a single archive file before transferring.

  • Again, don’t run computational work on the login node.