
COMPLECS: Batch Computing: Getting Started with Batch Job Scheduling - Slurm Edition

03/21/24 - 02:00 PM - 03:30 PM EDT

High-performance computing (HPC) systems are specialized resources shared by many researchers across all domains of science, engineering, and beyond. To distribute these advanced computing resources in an efficient, fair, and organized way, most of the computational workloads run on these systems are executed as batch jobs: prescripted sets of commands that run on a subset of an HPC system's compute resources for a given amount of time. Researchers submit these batch jobs as scripts to a batch job scheduler, the software that controls and tracks where and when the jobs submitted to the system will eventually run. However, if this is your first time using an HPC system and interacting with a batch job scheduler like Slurm, writing and submitting your first batch job scripts may be somewhat intimidating due to the inherent complexity of these systems. Moreover, schedulers can be configured in many different ways and often have unique features and options that vary from system to system, which you will also need to consider when writing and submitting your batch jobs.
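As a preview of what such a script looks like, the minimal sketch below requests a small allocation and runs a single command. The partition, account, and file names are placeholders that will differ from system to system; check your site's documentation for the values to use.

#!/usr/bin/env bash
#SBATCH --job-name=hello-batch          # name shown in the queue
#SBATCH --partition=shared              # placeholder; use a partition that exists on your system
#SBATCH --account=abc123                # placeholder allocation/account name
#SBATCH --nodes=1                       # number of compute nodes
#SBATCH --ntasks-per-node=1             # tasks (processes) per node
#SBATCH --cpus-per-task=1               # CPU cores per task
#SBATCH --mem=2G                        # memory per node
#SBATCH --time=00:10:00                 # wall-clock time limit (HH:MM:SS)
#SBATCH --output=hello-batch.%j.out     # output file; %j expands to the job ID

# The commands below run on the allocated compute node(s) once the job starts.
echo "Hello from $(hostname) at $(date)"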

In this second part of our series on Batch Computing, we will introduce you to the concept of a distributed batch job scheduler (what it is, why it exists, and how it works), using the Slurm Workload Manager as our reference implementation and testbed. You will then learn how to write your first job script and submit it to an HPC system running Slurm as its scheduler. We will also discuss best practices for structuring your batch job scripts, teach you how to leverage Slurm environment variables, and provide tips on how to request resources from the scheduler to get your work done faster.
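If you would like to get oriented ahead of the session, the commands below sketch the basic submit-and-monitor workflow and a few of the environment variables Slurm sets inside a running job; the script and job names are illustrative.

# Submit the job script; sbatch prints the job ID assigned by the scheduler.
sbatch hello-batch.sh

# List your queued and running jobs.
squeue -u $USER

# Cancel a job if needed (replace 123456 with your actual job ID).
scancel 123456

# Inside a running job script, Slurm sets environment variables such as
# SLURM_JOB_ID (the job's numeric ID), SLURM_JOB_NODELIST (the allocated
# node or nodes), and SLURM_NTASKS (the number of tasks requested), e.g.:
echo "Job $SLURM_JOB_ID is running on $SLURM_JOB_NODELIST with $SLURM_NTASKS task(s)"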

To complete the exercises covered in this Part II webinar session, you will need access to an HPC system running the Slurm Workload Manager as its batch job scheduler.

Visit SDSC's training and events page for a full list.

---
What is COMPLECS? - COMPLECS (COMPrehensive Learning for end-users to Effectively utilize CyberinfraStructure) is a new SDSC training program that covers the non-programming skills needed to effectively use supercomputers. Topics include parallel computing concepts, Linux tools and bash scripting, security, batch computing, how to get help, data management, and interactive computing. Each session offers 1 hour of instruction followed by a 30-minute Q&A. COMPLECS is supported by NSF award 2320934.

---
Marty Kandes
Computational and Data Science Research Specialist, SDSC
Marty Kandes is a Computational and Data Science Research Specialist in the High-Performance Computing User Services Group at SDSC. He currently helps manage user support for Comet, SDSC’s largest supercomputer. Marty obtained his Ph.D. in Computational Science in 2015 from the Computational Science Research Center at San Diego State University, where his research focused on studying quantum systems in rotating frames of reference through the use of numerical simulation. He also holds an M.S. in Physics from San Diego State University and B.S. degrees in both Applied Mathematics and Physics from the University of Michigan, Ann Arbor. His current research interests include problems in Bayesian statistics, combinatorial optimization, nonlinear dynamical systems, and numerical partial differential equations.

Contact

events [at] sdsc.edu

Location

This event will be held via Zoom.

Registration

Event Type

Training

Skill Level

Beginner

Event Affiliation

Community