UMBC High Performance Computing Facility
How to run FORTRAN 90 programs on tara

Introduction

Running Fortran 90 programs on the cluster is much like running any other job. Make sure you've read the tutorial for C programs first, since it covers the basics of compiling, submitting, and running jobs.

Serial Example

Let's try to compile this simple Fortran 90 program:
program HelloWorldF90
    write(*,*) "Greetings, denizens of planet Earth!"
end program HelloWorldF90


Download: ../code-2010/hello_serial-f90/hello_serial.f90
To compile this using GNU Fortran (gfortran) and create the executable hello_serial-f90-gcc, type:
[araim1@tara-fe1 hello_serial-f90]$ gfortran hello_serial.f90 -o hello_serial-f90-gcc
[araim1@tara-fe1 hello_serial-f90]$
Alternatively, to compile using PGI Fortran and create the executable hello_serial-f90-pgi, type:
[araim1@tara-fe1 hello_serial-f90]$ pgf90 hello_serial.f90 -o hello_serial-f90-pgi
[araim1@tara-fe1 hello_serial-f90]$
Running either executable should produce the following output:
[araim1@tara-fe1 hello_serial-f90]$ ./hello_serial-f90-gcc
Greetings, denizens of planet Earth!
[araim1@tara-fe1 hello_serial-f90]$ ./hello_serial-f90-pgi
Greetings, denizens of planet Earth!
[araim1@tara-fe1 hello_serial-f90]$
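To run the serial executable through the batch system instead of on the front end, the same kind of submission script shown in the C tutorial applies. Below is a minimal sketch assuming the SLURM scheduler described there; the job name, output file names, and the partition name "develop" are placeholders, so use the queue names given in the C tutorial.

#!/bin/bash
#SBATCH --job-name=hello_serial_f90
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=develop   # placeholder partition name; check the C tutorial for the current queues

# Run the serial Fortran 90 executable built above
./hello_serial-f90-gcc

Submit the script with sbatch as in the C tutorial; the output file should contain the same greeting shown above.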

Parallel Example

We'll assume that you already know the fundamentals of MPI execution from the C tutorial. Now let's create the following program:
program hello_parallel

  ! Include the MPI library definitions:
  include 'mpif.h'

  integer numtasks, rank, ierr, rc, len, i
  character*(MPI_MAX_PROCESSOR_NAME) name

  ! Initialize the MPI library:
  call MPI_INIT(ierr)
  if (ierr .ne. MPI_SUCCESS) then
     print *,'Error starting MPI program. Terminating.'
     call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
  end if

  ! Get the number of processors this job is using:
  call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)

  ! Get the rank of this process.  (Each process has a unique rank.)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Get the name of this processor (usually the hostname)
  call MPI_GET_PROCESSOR_NAME(name, len, ierr)
  if (ierr .ne. MPI_SUCCESS) then
     print *,'Error getting processor name. Terminating.'
     call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
  end if

  print "('hello_parallel.f: Number of tasks=',I3,' My rank=',I3,' My name=',A,'')",&
       numtasks, rank, trim(name)

  ! Tell the MPI library to release all resources it is using:
  call MPI_FINALIZE(ierr)

end program hello_parallel


Download: ../code-2010/hello_parallel-f90/hello_parallel.f90
Now we can compile the program with
[araim1@tara-fe1 hello_parallel-f90]$ mpif90 hello_parallel.f90 -o hello_parallel
The same compilation command should work even if you've changed the switcher to use a different MPI implementation and compiler. Running the compiled program is now exactly the same as running a C MPI program.
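For reference, here is a minimal sketch of a batch script for the parallel program. It assumes the SLURM scheduler and launch procedure covered in the C tutorial; the job name, output file names, partition name "develop", node count, and tasks-per-node value are placeholders, and the exact launch command (srun, mpirun, etc.) may depend on which MPI implementation the switcher has selected.

#!/bin/bash
#SBATCH --job-name=hello_parallel_f90
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --partition=develop     # placeholder partition name
#SBATCH --nodes=2               # example values only
#SBATCH --ntasks-per-node=4

# Launch the MPI program; use the launch command from the C tutorial for your MPI implementation
srun ./hello_parallel

Each MPI process should print one "hello_parallel.f" line with the task count, its rank, and its host name to the output file.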