The implementation of 'Elwetritsch II' will take place over the next few weeks.

What will change

The highlights are new nodes with the newest CPU architecture, new enhanced login and VGL nodes, new GPUs for machine learning, and a larger parallel filesystem for /scratch with much better performance (see here for details). The current nodes will be migrated into this new system step by step.

17.07.18

Elwe1 is operational again. Elwe1 and elwe3 are the next nodes to be replaced; they will be unavailable starting Tuesday at 07:00.

27.06.18

The changes are in effect. All head nodes now provide /scratch only on the new parallel filesystem, so it appears empty at first. Access to the old parallel filesystem is available only in the interactive partition.

News (June 2018)

The currently operational parallel filesystem has to be shut down as soon as possible. Please inspect your files on /scratch and select the files you still need. RHRK will neither copy nor back up any data on /scratch.

Starting on Wednesday, June 27th, the following measures will be taken:

  • All current nodes will be drained: no new jobs will start, to prevent them from using the current parallel filesystem. Jobs that are already running may finish their work.
  • You may selectively copy data from current /scratch to the new parallel filesystem:
    1. Request an interactive allocation with
      salloc -p interactive
      in a terminal; this will open a shell for you on an interactive node.
    2. Change your working directory
      cd $OLDSCRATCH
    3. Move your data selectively to the new parallel filesystem
      mv my_file $SCRATCH
      Alternatively, you may selectively copy whole directories
      cp -rp my_directory $SCRATCH
      and then remove the copied data
      rm -r my_directory
    Be aware: jobs still running on old nodes may modify data on the old /scratch.
  • Idle old nodes will be re-installed and reopened with access to the new parallel filesystem. Once all jobs have finished, normal operation will resume.
  • Once you have copied the required data to the new parallel filesystem, you may release pending jobs (they will then use the new /scratch):
    scontrol release <jobid>
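Taken together, the copy steps above can be sketched as one shell session. This is only a sketch: here $OLDSCRATCH and $SCRATCH point at throwaway temporary directories with dummy data so the example is self-contained, and my_directory stands in for your own data; on the cluster the two variables are set for you, and salloc -p interactive provides the shell.

```shell
#!/bin/sh
# Sketch: safe selective copy from the old to the new /scratch.
# Throwaway stand-ins so the sketch is self-contained:
OLDSCRATCH=$(mktemp -d)
SCRATCH=$(mktemp -d)
mkdir -p "$OLDSCRATCH/my_directory"
echo "results" > "$OLDSCRATCH/my_directory/data.txt"

# Change to the old scratch filesystem
cd "$OLDSCRATCH"

# Copy the directory, preserving permissions and timestamps
cp -rp my_directory "$SCRATCH"

# Remove the original only if the copy is verified identical
diff -r my_directory "$SCRATCH/my_directory" && rm -r my_directory
```

Verifying with diff -r before rm -r protects against an incomplete copy, for example when the transfer was interrupted.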
Some of the new nodes will be opened for testing. They use the new /scratch and cannot access the old one. They are available in the partition skylake:
#SBATCH -p skylake
Please use these nodes to test whether your software still runs on the new hardware.
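A minimal batch script for such a test might look like the following sketch; the time limit and output file name are placeholder choices, and the final commands should be replaced by a short test run of your own software.

```shell
#!/bin/bash
#SBATCH -p skylake              # run on the new skylake test nodes
#SBATCH -t 00:10:00             # short time limit for a functional test
#SBATCH -o skylake_test.%j.out  # write output to a per-job file

# Report where the job ran; replace with a test run of your own software
hostname
echo "scratch is: $SCRATCH"
```

Submit it with sbatch and check the output file to confirm the job ran in the skylake partition and that the new /scratch is visible.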

Head nodes

All login nodes except elwe4 will be replaced by new ones with the new CPU generation. Each replacement will be announced one day in advance, and only one login node will be replaced at a time.
Finished tasks

Elwe2 and elwe4 have been replaced. The K80 GPUs will be made available afterwards.