Follow these steps to set up an account on PSC Bridges-2.

1. Create an account on the XSEDE portal: click on “Create account” and follow the instructions.
2. Later on, an allocation will be granted to this account; once it is, you will receive an email with your PSC username.
3. Set up Duo two-factor authentication under My XSEDE -> Profile -> DUO. Follow the procedure to register your mobile number and install the Duo application on your phone.
4. Log in to the XSEDE login hub with your XSEDE username and password, completing the Duo two-factor authentication.
$ ssh ophirmaor@login.xsede.org
Warning: Permanently added 'login.xsede.org,149.165.168.51' (ECDSA) to the list of known hosts.
Please login to this system using your XSEDE username and password:
password:
Duo two-factor login for ophirmaor

Enter a passcode or select one of the following options:

 1. Duo Push to XXX-XXX-7586
 2. Phone call to XXX-XXX-7586

Passcode or option (1-2): 1
Success. Logging you in...
Creating directory '/home/ophirmaor'.
Last failed login: Mon Nov 15 19:43:39 EST 2021 from 209.116.155.178 on ssh:notty
There were 7 failed login attempts since the last successful login.

# Welcome to the XSEDE Single Sign-On (SSO) Hub!
#
# This system is for use by authorized users only, and is subject to the XSEDE
# Acceptable Use Policy, described at https://www.xsede.org/usage-policies.
# All activities on this system may be monitored and logged.
#
# Your storage on this system is limited to 100MB. Backup is not provided.
#
# From this system, you may login to other XSEDE system login hosts on which
# you currently have an active account. To see a list of your accounts, visit:
# https://portal.xsede.org/group/xup/accounts
#
# To login to an XSEDE system login host, enter: gsissh <login-host>
# where <login-host> is the hostname, alias or IP address of the login host.
# The following default gsissh host aliases have been defined:
#
#   anvil     bridges   bridges2      comet      comet-gpu   darwin
#   expanse   kyric     mcc           osg        rmacc-summit stampede2
#
# For example, to login to the Comet system at SDSC, enter: gsissh comet
#
# E-mail help@xsede.org if you require assistance in the use of this system.
[ophirmaor@ssohub ~]$
5. Use gsissh to log in to the Bridges-2 login node.
[ophirmaor@ssohub ~]$ gsissh bridges2
********************************* W A R N I N G ********************************
You have connected to br014.ib.bridges2.psc.edu, a login node of Bridges 2.
This computing resource is the property of the Pittsburgh Supercomputing Center.
It is for authorized use only. By using this system, all users acknowledge
notice of, and agree to comply with, PSC polices including the Resource Use
Policy, available at http://www.psc.edu/index.php/policies. Unauthorized or
improper use of this system may result in administrative disciplinary action,
civil charges/criminal penalties, and/or other sanctions as set forth in PSC
policies. By continuing to use this system you indicate your awareness of and
consent to these terms and conditions of use. LOG OFF IMMEDIATELY if you do
not agree to the conditions stated in this warning
********************************* W A R N I N G ********************************

For documentation on Bridges 2, please see www.psc.edu/resources/bridges-2/user-guide/
Please contact help@psc.edu with any comments/concerns.

Projects
------------------------------------------------------------
Project: cis210088p      PI: Shawn Brown
***** default charging project *****
    GPU              10,000 SU remain of 10,000 SU    active: Yes
    Regular Memory  100,000 SU remain of 100,000 SU   active: Yes
    Ocean  /ocean/projects/cis210088p  36k used of 9.766T
[maor@bridges2-login014 ~]$
Alternatively, you can log in from any machine using your PSC username and password:
$ ssh -l maor bridges2.psc.edu
maor@bridges2.psc.edu's password:
********************************* W A R N I N G ********************************
You have connected to br013.ib.bridges2.psc.edu, a login node of Bridges 2.
This computing resource is the property of the Pittsburgh Supercomputing Center.
It is for authorized use only. By using this system, all users acknowledge
notice of, and agree to comply with, PSC polices including the Resource Use
Policy, available at http://www.psc.edu/index.php/policies. Unauthorized or
improper use of this system may result in administrative disciplinary action,
civil charges/criminal penalties, and/or other sanctions as set forth in PSC
policies. By continuing to use this system you indicate your awareness of and
consent to these terms and conditions of use. LOG OFF IMMEDIATELY if you do
not agree to the conditions stated in this warning
********************************* W A R N I N G ********************************

For documentation on Bridges 2, please see www.psc.edu/resources/bridges-2/user-guide/
Please contact help@psc.edu with any comments/concerns.
Last login: Tue Nov 16 14:04:33 2021 from 209.116.155.178

Projects
------------------------------------------------------------
Project: cis210088p      PI: Shawn Brown
***** default charging project *****
    GPU              10,000 SU remain of 10,000 SU    active: Yes
    Regular Memory  100,000 SU remain of 100,000 SU   active: Yes
    Ocean  /ocean/projects/cis210088p  36k used of 9.766T
[maor@bridges2-login013 ~]$
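As an optional convenience, you can add a host alias to your local SSH configuration so you do not have to type the full hostname each time. This is a suggestion, not part of the official procedure; the username `maor` below is an example and should be replaced with your own PSC username.

```
# ~/.ssh/config -- optional convenience entry (example username)
Host bridges2
    HostName bridges2.psc.edu
    User maor
```

With this entry in place, `ssh bridges2` is equivalent to `ssh -l maor bridges2.psc.edu`.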
6. Use Slurm to inspect the cluster. The sinfo command lists the partitions and the state of their nodes:
[maor@bridges2-login014 ~]$ sinfo
PARTITION    AVAIL  TIMELIMIT  NODES  STATE  NODELIST
RM*          up     infinite   1      down*  r059
RM*          up     infinite   1      drng   r384
RM*          up     infinite   1      drain  r191
RM*          up     infinite   2      resv   r[099,174]
RM*          up     infinite   34     mix    r[053-055,057-058,063,068,071,073-075,077,080-081,084-085,090,092,094,109,126-127,129,135,137,143,151,157,161,172-173,175,190,282]
RM*          up     infinite   393    alloc  r[050-052,056,060-062,064-067,069-070,072,076,078-079,082-083,086-089,091,093,095-098,100-108,110-125,128,130-134,136,138-142,144-150,152-156,158-160,162-171,176-189,192-246,248-251,253-272,274-281,283-360,362-376,378-383,386-446,448-488]
RM*          up     infinite   2      idle   r[247,377]
RM*          up     infinite   5      down   r[252,273,361,385,447]
RM-512       up     infinite   16     alloc  l[001-016]
RM-shared    up     infinite   1      down*  r059
RM-shared    up     infinite   2      resv   r[099,174]
RM-shared    up     infinite   70     mix    r[005-008,010-013,018,020-025,027-029,031-037,039-049,053-055,057-058,063,068,071,073-075,077,080-081,084-085,090,092,094,109,126-127,129,135,137,143,151,157,161,172-173,175,190,282]
RM-shared    up     infinite   158    alloc  r[009,014-017,019,026,030,038,050-052,056,060-062,064-067,069-070,072,076,078-079,082-083,086-089,091,093,095-098,100-108,110-125,128,130-134,136,138-142,144-150,152-156,158-160,162-171,176,180-181,192-196,199-204,208,223,227-229,245-246,250,257-258,260,263-264,285,289-291,293,295,301,310,319,323-325,328-330,332-333,336-337,342,344-345,347,353-354,356-360,367]
RM-small     up     infinite   2      mix    r[001-002]
RM-small     up     infinite   2      idle   r[003-004]
GPU          up     infinite   18     mix    v[003-006,008-012,015-016,018,025,027-030,033]
GPU          up     infinite   13     alloc  v[007,013-014,017,019-024,026,031-032]
GPU          up     infinite   1      idle   v034
GPU-shared   up     infinite   18     mix    v[003-006,008-012,015-016,018,025,027-030,033]
GPU-shared   up     infinite   13     alloc  v[007,013-014,017,019-024,026,031-032]
GPU-small    up     infinite   1      alloc  v002
GPU-small    up     infinite   1      idle   v001
EM           up     infinite   1      mix    e001
EM           up     infinite   3      alloc  e[002-004]
BatComputer  up     infinite   1      mix    dv001
BatComputer  up     infinite   3      alloc  dv[002-004]
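The sinfo output can be narrowed to the entries you care about; sinfo itself accepts a state filter (e.g. `sinfo -t idle`), or you can pipe it through awk. The sketch below feeds a few sample lines taken from the output above through awk to keep only the idle node groups; on the cluster you would pipe the live command instead (`sinfo | awk '$5 == "idle"'`).

```shell
# Sample sinfo lines (from the output above); keep only the idle entries
# and print the partition and node list columns.
printf '%s\n' \
  'RM up infinite 2 idle r[247,377]' \
  'RM up infinite 5 down r[252,273,361,385,447]' \
  'GPU up infinite 1 idle v034' \
| awk '$5 == "idle" {print $1, $6}'
# prints:
# RM r[247,377]
# GPU v034
```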
7. Allocate a node.
Note: For ISC23, use only the RM partition with a 4-node allocation.
Here is an example of allocating a single node:
[maor@bridges2-login014 ~]$ salloc -N 1 -p RM
salloc: Pending job allocation 5062923
salloc: job 5062923 queued and waiting for resources
salloc: job 5062923 has been allocated resources
salloc: Granted job allocation 5062923
salloc: Waiting for resource configuration
salloc: Nodes r352 are ready for job

[maor@r352 ~]$ ibstat
CA 'mlx5_0'
        CA type: MT4123
        Number of ports: 1
        Firmware version: 20.30.1004
        Hardware version: 0
        Node GUID: 0x9440c9ffffac407c
        System image GUID: 0x9440c9ffffac407c
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 200
                Base lid: 636
                LMC: 0
                SM lid: 35
                Capability mask: 0x2651e848
                Port GUID: 0x9440c9ffffac407c
                Link layer: InfiniBand

[maor@r352 ~]$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              128
On-line CPU(s) list: 0-127
Thread(s) per core:  1
Core(s) per socket:  64
Socket(s):           2
NUMA node(s):        2
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC 7742 64-Core Processor
Stepping:            0
CPU MHz:             3335.192
BogoMIPS:            4491.35
Virtualization:      AMD-V
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-63
NUMA node1 CPU(s):   64-127
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
[maor@r352 ~]$
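Instead of an interactive salloc session, work can also be submitted as a batch job with sbatch. The sketch below is a minimal job script assuming the ISC23 setup noted above (RM partition, 4 nodes); the job name, time limit, and the application command are placeholders to adapt to your own workload.

```shell
#!/bin/bash
#SBATCH -J isc23-job     # job name (placeholder)
#SBATCH -p RM            # RM partition, per the ISC23 note above
#SBATCH -N 4             # 4-node allocation, per the ISC23 note above
#SBATCH -t 00:30:00      # wall-time limit (adjust as needed)

# Launch the MPI application across the allocated nodes
# (./my_application is a placeholder binary).
mpirun ./my_application
```

Save the script (e.g. as job.sh) and submit it with `sbatch job.sh`; monitor it with `squeue -u $USER`.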