
Frequently Asked Questions on ARPS


Table of Contents

  1. Source Code and Documentation
  2. Compiling and Linking ARPS
  3. Compiling and Running Parallel (MPP) Version of ARPS
  4. Running Shared Memory Version of ARPS
  5. ARPS Initialization
  6. Terrain and Surface Characteristics Data
  7. Graphic Plotting and Visualization
  8. ZXPLOT Graphics Package
  9. Output Data Format
  10. Code Performance and Platform Dependency
  11. Model Physics

Source Code and Documentation
Back to top of page

Q: How do I obtain ARPS code and documentation?

A: See section 3.2 of the ARPS 4.0 User's Guide. The code and documents can also be accessed from the ARPS World Wide Web (WWW) page at http://www.caps.ou.edu/ARPS. The current officially released version is ARPS Version 4.0. If you have problems connecting to the site or finding the files, e-mail arpsuser@ou.edu.

Q: We obtained the officially released ARPS Version 4.0. Does this version have NESTING capability? If so, what information (input parameters) do we specify in the input file in order to do a nested simulation. If not, is it possible to send us a version that includes grid nesting?

A:
Our nesting capability is built on top of an adaptive grid refinement interface. The interface wraps around the ARPS grid solver and controls the time integration of the entire system.

The interface worked with Version 3.3, but has yet to be updated to work with version 4.0. With the version you have, it's not possible to turn on grid nesting. We will try to update the interface as soon as we can.

Q: I am beginning to work with ARPS 4.0 and I would like to acquire the Adaptive Grid Refinement code.

A: This capability is now available starting ARPS 4.3.0. You can get it from our ftp server (ftp://ftp.caps.ou.edu/ARPS).

Back to top of page

Compiling and Linking ARPS
Back to top of page

Q: When I typed the command makearps arps40, I got a message saying that the compilation flags were unknown. What could be the problem?

A:
makearps tries to use suitable compilation options by determining the system type as one of IBM RS/6000, Cray, Sun, DEC Alpha or an undetermined UNIX system. It's possible that the compilation options prespecified in makearps are not compatible with the system version that you are using. In this case, you need to modify the options by editing makearps. For more information on the makefiles, see Chapter 3.

Q: I did makearps arps40, and everything seemed OK, but the executable code arps40 does not work even if I repeat the makearps command. What is going on?

A:
Some systems keep the object code of a source file even if the compilation aborts. In this case, make may treat the object code as up to date based on its modification date. The executable file thus produced will be invalid. Check the previous compilation messages to see if any files need to be recompiled. You can recompile the troublesome files or remove all the object codes and re-do makearps.

If different computers share the same file system, the object code produced on one system may not be compatible with the one generated on another. In this case, you need to delete all *.o files and re-do makearps.
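The full cleanup amounts to deleting every object file before rerunning makearps so that make recompiles everything from source. A minimal sketch (the directory and file names are illustrative stand-ins, not real ARPS files):

```shell
# Work in a throwaway directory with stand-in object files.
demo=$(mktemp -d)
cd "$demo"
touch arps40.o micro3d.o        # pretend these are stale object files

# Delete all object code so the next makearps recompiles from source.
find . -name '*.o' -delete

# Count the remaining object files -- prints 0.
find . -name '*.o' | wc -l
```

After the cleanup, rerun makearps arps40 as usual.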

Q: We tried to compile ARPS on the Cray using the command 'make arps40.cray', but failed. What are we missing?


A: With ARPS 4.0, the shell script makearps is used to control all the compilations. The proper command is makearps plus arguments. You can get a list of valid arguments by entering makearps without any arguments.

To compile and link arps40, do makearps arps40. You can omit the machine type parameter.

Q: We had problems getting the Cray version of ARPS to read a terrain data file that we created using our own program. ARPS ends abnormally when trying to read the first record in the file, although it opens the file fine. The write statements we used to create the file appear to be identical to those in arpstern12.f (ARPSTERN). What's happening?

A:
ARPS uses the IEEE binary format (the one used by most 32 bit Unix workstations) rather than the Cray native binary format for the terrain file (and other binary input and output files). This is achieved by calling two Cray routines

CALL asnctl ('NEWLOCAL', 1, ierr)
CALL asnfile(tern_file,'-F f77 -N ieee', ierr)

before the open statement for file tern_file. The opened file is then assumed to be in the IEEE format. You need to add these two statements before the open statement of the program that creates your terrain data, so that the generated data are written in the IEEE format.

Q. In trying to run makearps arps on an IBM RS/6000 system, I got the following: ...

Making arps40 for rs6000
Compilation options: Optimization level: 1
...
make -f make.arps 'FFLAGS=-O ...'
xlf -O -NA16384 -ND6912 -c arps40.f
stty: TXGETLD: Not a typewriter
The error code from the last failed command is 1.
Make Quitting.

A: It seems that there is a `stty' or a `tset' command in your .cshrc or .tcshrc. Comment that line out, or move it to your .login file (where it rightfully should be) and you will be fine. (Both those files are in your home directory).

Q: Questions concerning using ARPS to simulate stratocumulus clouds in the boundary layer:

1. Can we use ARPS as an LES without violating the sub-grid scale parameterization?
2. Will ARPS be able to give us LES-type resolution (approx. 10 m)?
3. Does ARPS handle radiation-cloud interaction?

A: 1. You can run the ARPS as an LES model - several of our students have done just that. We have the Smagorinsky, 1.5-order TKE, and Germano dynamic subgrid closure models available. Note that this is not DNS, but rather LES. The model can be run at any resolution you like - we have taken it down to tens of meters.

2. Yes, you can run at any resolution you like.

3. At the moment, the ARPS radiation package does not include cloud interactions. However, at this very moment, we are changing the code to incorporate this feature via the radiation physics module developed at NASA Goddard (Wei-Kuo Tao's group).

Q: After running makearps arpspltncar, this is what happens: By the way, I am using an SGI.

Any clues?

A: To run arpsplt you need to install ZXPLOT on your system. It is available in binary format at our ftp site. I noticed that in your case makearps didn't use zxncarf77 as the loader to link the object files. That means that even if you had ZXPLOT installed, make would not use it. You need to add a few lines to your makearps file right after line 940 (for version 4.2.1).

set LDR_str = '"LD=f77 -ignore_unresolved"' # line 940
if ($CMD == arpspltncar || $CMD == arpspltmax) then # new line
  set LDR_str = '"LD = zxncarf77 -ignore_unresolved"' # new line
endif # new line

Q: I am curious whether ARPS can be run on a PC workstation (Pentium Pro, Windows NT, gobs of RAM, Microsoft Powerstation).

If you don't think ARPS could work on a PC platform, what is the most inexpensive configuration you could suggest?

A: ARPS can be run on a PC under Linux (a free Unix-like system) without any modification. There are a few groups actually running it on a PC with Microsoft Powerstation. Under a Unix (or Unix-like) system, the compilation and linking are automatically handled by a shell script (makearps) and the unix make utility; the function of the former is mainly to detect the platform type automatically. To compile it under NT, you need to know what files to use and link.

We found that a Pentium Pro is as powerful as a lower end workstation.

Q: I would like to compile and run the ARPS on a PC. Is there a way to do it?

A: We run ARPS routinely on IBM-compatible PC's running the Linux operating system. Linux is a free version of Unix; a version is available from http://www.redhat.com. Under Linux, everything is the same as if you were using a Unix workstation, and the ARPS makearps script automatically detects the type of system you use.

If you are running a version of Windows, with a Fortran compiler (e.g. Microsoft Powerstation), then you need to figure out what files to link to build the executables for ARPS and its supporting programs.

On a unix system, the compilation and linking are handled by our makearps script and the make configuration file makefile.arps. The easiest way to figure out which files to use is to compile and link arps on a unix platform and see which files are used. If a Unix platform is not available, you have to look inside makefile.arps to see which files are linked for each program.
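One way to "look inside makefile.arps" is to search it for the target's rule, which lists the object files that get linked. A sketch using a two-line stand-in file (the real makefile.arps rules and file names will differ):

```shell
# Create a stand-in make rule shaped like the ones in makefile.arps.
printf 'arps40: arps40.o micro3d.o\n\tf77 -o arps40 arps40.o micro3d.o\n' > makefile.demo

# Show the rule for the arps40 target plus the line that links it.
grep -A 1 '^arps40:' makefile.demo
```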

To use ARPSPLT under Windows, you need to obtain an object code library of ZXPLOT (http://www.caps.ou.edu/ZXPLOT) compatible with your compiler. Special arrangement is often needed.

Q: I downloaded the latest version of ARPS and tried to run it on an IBM RS/6000 under AIX 4.1.4. The compilation went fine, but when I ran ARPS I kept getting the same error message related to namelist input. The log file seems to be fine, however. Any suggestions?

A: The xlf compiler in new versions of AIX (4.0 and later) uses a slightly different namelist format, which causes namelist read errors. You need to set:
setenv XLFRTEOPTS namelist=old
once before running ARPS. You can add this line to your .login file.

Q: I have a question concerning the -opt compile option. What are the strings attached to optimizing the code, i.e., why isn't it always optimized? Furthermore, is there a major run-time speed difference between using the 1, 2, and 3 settings for that option?

A: This is compiler dependent. You should consult your compiler man page or manual, and perhaps do some benchmark testing, to find the optimal compiler options (and plug them into makearps). For most systems, -opt 3 uses -O3 and gives decent optimization. The code generally runs much slower when -opt 1 is used.

Back to top of page


Compiling and Running Parallel Version of ARPS
Back to top of page

Q: We are planning to use ARPS for regional high-resolution meteorological modeling and would like to get more information about the various parallel versions of ARPS.

We have a computer facility which includes a CM-5 (64 nodes), an IBM SP-2 (8 processors) and an SGI Power Challenge (2 processors, R8000). We would like some technical advice on which of these platforms (CM-5, IBM SP-2 or SGI) we should run the parallel version of ARPS on.

A: The ARPS distribution includes a message passing version of the code, with special hooks for running it on the SP-2 under MPI. We have not tested ARPS 4 on the CM-5, but I don't foresee any problems, if your machine supports PVM and/or MPI. The code has been tested on the SGI power series, both under MPI and with the automatic parallelizing compiler. The make script will select the correct type based on user specifications. In summary, you should be in a position to run it on all 3 machines. The Power Challenge may be easier to use with the automatic parallelization, but the SP-2 may be more powerful especially if it has many processors.

Q: I am interested in running the parallel version of ARPS on a 4 processor SGI Challenge L 4XR4400.

The example in the users guide described compiling and running the parallel version of the ARPS on the Cray T3D. Is this a fairly standard example of how the parallel version of ARPS is compiled and run? I have compiled ARPS using the makearps -p option and made what I think are the appropriate changes to the par.inc file. This generated a working executable which appeared to only utilize one processor. Do I also need to split the input files up and change the values in the dims.inc file to the size of the split files? I was not sure if this was only the case for the T3D, or for all cases in which the model is run in parallel.

A: The document `Using the parallel version of the ARPS' describes the steps to use the code in a distributed-memory environment, using PVM/MPI. To create and compile the parallel version of the ARPS code on the SGI you will have to run

However, I feel that you might be better off using the native parallel fortran compiler (pfa). `pfa' will create parallel shared-memory code which is capable of running in parallel on your SGI, and it provides a sequential interface (i.e. you will run the code as you would on a single-processor machine). You can set an environment variable to decide how many processors you want at run time: the command setenv SET_MP_NUMTHREADS X will start X threads (processes).

The SGI native pfa compiler is supported by the makearps script.
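Put together, the shared-memory route looks like this. The build and run lines are commented out because they need an SGI with pfa; only the environment-variable line is live, shown with a POSIX-shell equivalent of the csh setenv:

```shell
# makearps -p arps                # build with SGI auto-parallelization (pfa)
# setenv SET_MP_NUMTHREADS 4     # csh: use 4 threads at run time
# arps < arps.input > arps.output

# POSIX-shell equivalent of the setenv line:
SET_MP_NUMTHREADS=4
export SET_MP_NUMTHREADS
echo "$SET_MP_NUMTHREADS"        # prints 4
```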

Q: I'm trying to determine if/how I can set up ARPS to run in its parallel version.

A: To run the ARPS on an SGI Power Challenge, use the -p option in makearps: makearps -p arps

This will compile the arps with the auto-parallelization available on the SGI.

The document which describes how to use the parallel version of the ARPS is available as a postscript file at the URL:
http://wwwcaps.ou.edu/ARPS/using_par_arps.ps
(see also http://wwwcaps.ou.edu/ARPS/ARPS4.guide.html).

Please note, however, that this describes how to use the distributed memory, message passing version of the ARPS, *not* the one you would use on the SGI.

Q: I would greatly appreciate more information or guidance on the use of arps on a cluster of workstations using MPI. In particular: 1. What is the use of the executable 'mpitrans' that is generated when I make arps_mpi? 2. How do I use 'splitfiles'?

A: 1. Program mpitrans is used to translate regular arps source code into parallel version. Makearps and makefile.arps should be able to handle the translation automatically;

2. If you want to use any data sets, such as terrain, surface property, 3-d initial conditions, soil model initial conditions, and/or external boundary conditions, you will have to split the data sets for each individual PE before running parallel ARPS. To run splitfiles, use the same input you prepared for ARPS, arps.input. However, to run parallel ARPS, say arps_pvm, the input file should contain only the arps.input file name. For example,

arps_pvm [-options] < inputfile > outputfile

where inputfile contains only one line which is the filename of arps.input. http://wwwcaps.ou.edu/ARPS/using_par_arps.ps is a PS file containing additional instructions.
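The step that trips people up is that the parallel executable does not read arps.input directly; it reads a one-line file naming it. A sketch (splitfiles and arps_pvm are commented out since they need the ARPS build; the file names are illustrative):

```shell
# splitfiles < arps.input             # split terrain/IC/BC data per PE
echo arps.input > inputfile           # the single line arps_pvm expects
# arps_pvm < inputfile > outputfile   # run the message-passing model
cat inputfile                         # prints: arps.input
```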

Q: We are applying ARPS using a multiprocessor as the host. I have run into two problems:

1. I don't understand why each host cannot read in their own input files using the existing namelist facility in ARPS.

2. If indeed one must use readinput.f when multiprocessing, we seem to be missing a few lines of code. Our readinput.f file ends like this:

There is at least an END line missing, but surely there are a few more macro calls for other "namelist" items?

A: 1. The reason the message passing versions of arps do not use namelist facilities is because the T3D (which used to be the primary message passing version of the arps) does not support namelist. If you wish to use namelist you will need to play around with how the message passing version of initpara3d is made.

2. readinput.F is not the completed version of the readinput subroutines (that is why I used a .F instead of a .f). A section from makefile.arps illustrates what happens:

readinput_pvm.f : readinput.F awkinput globcst.inc bndry.inc exbc.inc
        cp readinput.F readinput.scratch
        cat globcst.inc bndry.inc exbc.inc \
            | $(AWK) -f awkinput >> readinput.scratch
        m4 readinput.scratch > readinput_pvm.f

First the major include files for the arps are processed by awk to get out all the variables that might be input into the model and these are appended to readinput in a special format. This file is then processed by m4 to produce the actual fortran code used by the message passing version.

Another note: when running the message passing version of the arps, the readinput routines issue a warning for any variable they come across that they do not know how to handle. Because the end of the arps input files has a myriad of variables used not by arps but by auxiliary programs, one tends to get a lot of warnings, which can be ignored.

Back to top of page


Running Shared Memory Version of ARPS
Back to top of page

Q: The model stopped due to a floating point error in a microphysics routine, but I could not find anything wrong there.

A:
The model can become unstable due to problems outside the microphysics. Instabilities are often caught in the exponential calculations inside the microphysics. Check your model setup and input control parameters carefully.

Q: The model stopped before the specified stop time was reached. What could be the cause?

A:
Most likely your time integration was unstable. ARPS checks the velocity field for stability. If the maximum wind speed exceeds 100 m/s, the model will stop and issue a message. It will also make a history data dump for that time. It is also possible that improper input parameters were specified. Check your output file for the input parameter settings and the model run information.

When the integration is unstable, you should examine the model fields to determine the nature of the instability. An unacceptably large time step size, improper mixing coefficients and boundary problems are the most common causes of instability. When the model becomes unstable after only a few large time steps, most probably dtsml is too large. Computational mixing coefficients that are too large can also cause instability.

Q: I re-ran the same executable file, arps40, that worked before and got an I/O error message. What could have happened?

A:
Check your disk space to see if the disk is full. ARPS can produce a huge amount of data, and when the disk is full the model will fail. You also need to make sure that you have permission to write into the output directory (dirname), which is specified in arps40.input.


Q: We ran the Del City storm case and compared the results to those discussed in the ARPS 3.0 Users Guide. We used the same grid setup, and we set all of the model parameters as listed in the manual. However, Wmax of the storm that we simulated was only 29 m/s after 1 hour, whereas Wmax of the storm discussed in the manual was around 45 m/s. All of the other features of our simulation agree well with what is described.

A:
The version that produced the w plot in the 3.0 Guide was in error. The water loading was effectively turned off during that run, therefore w was too large. The results with the current version should be correct.

Q: In attempting to run ARPS, I got the following error near the beginning of the run:

Data dump of grid and base state arrays into file .//raymer.hdfgrdbas
You are calling a dummy function. No HDF data is dumped.
The model stopped in HDFDUMP.

A: The message says that you were trying to dump a history file in HDF format, while the executable code was linked with a dummy subroutine. Since HDF is not a default format for the makearps compiling options, you need to specify it explicitly. Try the following,

% makearps -io hdf arps

Then re-run the model. The same is true for several other formats, including Vis5D. Note that we stopped supporting the NetCDF format for lack of use.

Q: I was looking through the manual on running ARPS and did not find directions on running the model in two dimensions. How do I go about setting up a 2-d run?

A: See page 38 of the user's guide, option "runmod." Note that other statements about dimensionality are found in nx, ny, nz and the boundary condition parameters. The latter are particularly important when running in 2-D...be sure they're set correctly.

Q: I am using ARPS to model mountain lee waves, and I wish to use the nesting capabilities of the model to investigate upwind flow reversal and lee vortices. Could you please send me the AGRI source code and any information on how to use it, and also any other hints on nesting (including one-way interactive self-nesting)?

A: Two-way interactive nesting did not become available to general users until version 4.3.0.

As to one-way interactive nesting, it is available with earlier versions.

To do that, you need to use ARPSR2H to interpolate the coarse grid history data to a fine grid at both the initial time and the times for boundary forcing.

ARPSR2H calls the ARPS initialization routine to initialize a coarse grid (in exactly the same way as the ARPS is initialized using external data), then interpolates the fields to a fine grid that is defined by parameters in namelist block &gridinit in arps40.input.

ARPSR2H should be run for each time that a boundary forcing file is needed. By setting exbcdmp=1 in arps40.input, an EXBC file is produced at the same time a history file is produced.

With the EXBC files prepared, you then run the fine grid using the externally forced boundary option.

The dimension of the fine grid is set in r2hdims.inc. You can only do two levels of one-way nesting at a time.
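The per-time workflow can be summarized as a loop: run ARPSR2H once for the initial time and once per boundary-forcing time. A dry-run sketch that only echoes the commands (the forcing times and the exact arpsr2h invocation are illustrative, not taken from the ARPS distribution):

```shell
# One ARPSR2H run per forcing time; exbcdmp=1 makes each run write an
# EXBC file alongside the interpolated history file.
for t in 000000 030000 060000; do
  echo "arpsr2h < arps40.input   # coarse-grid data valid at $t"
done
```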

Since version 4.3.0, a new program called arpsintrp performs several tasks, including that of ARPSR2H. It can process data at several times at once.

Q: In the ARPS 4.0 User's Guide it is noted that the PBL depth is calculated via the Richardson number and a stability-dependent rate equation. However, in sfcphy3d.f it seems that the PBL top is assumed to be where Tv is larger than Tv at the surface. The friction velocity, etc., do not seem to be used as noted in the User's Guide. Is this correct?

A: The option for predicting PBL depth is not implemented in the official version, since it does not necessarily work better than diagnosed height.

You are correct about the current way of determining PBL depth (based on the theta profile).

Q: I am trying to use the radiation upper boundary condition, but I got the error message "stop set 99", although nx is set to an odd number and ny=4. I have also checked bc3d.f, where I cannot find the upper radiation BC for tbc=4, although there is an option in the arps.input file.

A: The "stop set 99" message is generated in an FFT initialization package. You need to follow the input file instructions for setting nx. The formula for determining an nx which is compatible with the FFT program and the upper boundary condition is given in the tbc explanation section of the input file. Use

nx-1 = 2**P * 3**Q * 5**R
ny-1 = 2**P * 3**Q * 5**R

where P .GE. 1, Q .GE. 0, and R .GE. 0 (nx and ny must be odd!)

to determine nx.
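The factorization requirement can be checked mechanically. A small POSIX-shell sketch (the function name is ours; it also enforces the odd-nx requirement from the input file):

```shell
# Check nx-1 = 2**P * 3**Q * 5**R with P >= 1, and that nx is odd.
check_nx() {
  n=$(( $1 - 1 ))
  if [ $(( $1 % 2 )) -eq 0 ]; then
    echo "$1: bad"; return       # nx must be odd
  fi
  for f in 2 3 5; do             # strip all factors of 2, 3 and 5
    while [ $(( n % f )) -eq 0 ]; do n=$(( n / f )); done
  done
  if [ "$n" -eq 1 ]; then echo "$1: ok"; else echo "$1: bad"; fi
}

check_nx 65    # 64 = 2**6    -> prints "65: ok"
check_nx 67    # 66 = 2*3*11  -> prints "67: bad"
```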

Q: I need a very simple version of the model to use in classroom instruction. Do you have a simpler version (1.0, 2.0 or so) of ARPS?

A: ARPS is written in such a way that it can be used as a straightforward simple atmospheric model for idealized cases (warm-bubble released near surface to induce convection, flow over a "gaussian" mountain, etc.) or for more complicated and realistic cases (ingest of observations to predict extratropical cyclone development).

I would not suggest using older versions, because they are no longer maintained and could possibly have contained coding errors or basic problems that we have since corrected in our updated version. By using the current version (ARPS4.2.4), you can still achieve a basic simplicity by turning off various model capabilities, such as not including moist or surface physics processes, using a single-sounding to initialize the model, not including terrain data, etc. Please reference the "benchmark" cases in the User's Guide p. 295, noting especially the "Del City supercell" case.

Q: I need to specify one lateral boundary condition (wbc) as an externally forced condition and the others as internally determined conditions; however, ARPS has no such mixed-condition options. If I try to modify the original programs, which subroutines should I modify?

A: It appears that you are using a relatively simple setup - i.e., you do not use a 3-d data set to initialize the model.

You are correct that ARPS cannot mix the external bc with other lbc options, since the exbc option is designed to work with a set of exbc files provided by another model or by ARPS itself.

In your case, you should choose lbcopt=1 and modify the code for one boundary. The easiest thing to do is to reset the boundary values for u, v, w, ptprt, pprt, qv and the other q's whenever bcu, bcv, lbcw, bcp, bcpt, bcq and bcqv are called. grep -i 'call bc' *.f and grep -i 'call lbc' *.f will list all the occurrences. Most of the calls are in solve3d.f.

Q: I'm dumping a history file in GrADS format. It appears the pressure values in the variable "p" in the history file are defined at the wind vector points (i.e. the coordinates in "zp"). Is this correct?

A: 1. In GrADS format, all variables in the history data are dumped at scalar points. That means all wind components, as well as zp, have been interpolated from their own grid points to the scalar points. Pressure p is defined at scalar points, so no interpolation needs to be done.

Q: Where are the scalar quantities (i.e. potential temperature etc.) in the history file evaluated? At the cell centers or have they been interpolated to the z levels?

A: A scalar point refers to the center of a grid cell.

Q: Are potential temperatures referenced to the reference pressure "p0" defined in the file phycst.inc, or are they referenced to the true surface pressure at their location?

A: Potential temperature pt is referenced to "p0" defined in phycst.inc.

Q: Are the history dump values evaluated at indices z=1 and z=nz meaningful?

A: GrADS history dump values are calculated at scalar points, as I mentioned in 1. Therefore, the values at index x=nx, y=ny, or z=nz are not meaningful in terms of evaluation using GrADS. However, those values are needed to restore the original values at the staggered points. The ARPS history read routine knows how to map the values back from the scalar points to the vector points.

Q: The terms "computational space" and "physical space" are frequently used in the source code. Can you say precisely what is meant by each term?

A: ARPS equations are written in a curvilinear coordinate system, we call it computational space, defined by

In the horizontal, the computational coordinates are the same as the physical coordinates, since we do not perform any coordinate transformation in the x and y directions. In the vertical, however, the computational coordinate is defined as a function of the physical coordinates (x,y,z). The ARPS domain is gridded on coordinates x(i), y(j) and z(k), with constant increments dx, dy and dz. That is computational space. The ground surface z=0 is at k=2. The physical heights are represented by zp(i,j,k). If there is no terrain and the vertical coordinate is not stretched, zp(i,j,k)=z(k).

Q: The computational mixing coefficient is specified in arps.input, but in the source code the computational mixing term has the diffusion multiplied by a coefficient that depends on the horizontal or vertical grid size. When we simulate the same situation except for the horizontal grid size, should we modify the coefficients in arps.input to account for the grid size? Which is the important coefficient, the one in arps.input or the one multiplied by the horizontal grid scale?

A: The dimensional mixing coefficient (the one actually used in the model equations) depends on the grid size in order to achieve a similar amount of mixing on 2-delta waves. In arps.input, we specify coefficients that have a dimension of 1/s. This way, we do not need to change the values much when we change the resolution.

Back to top of page


ARPS Initialization
Back to top of page

Q: Is it possible to initialize ARPS with a full 3-D data set yet?

A:
You can initialize ARPS using an external 3-D data set with initialization option initopt = 3. The format of the initial data set is exactly the same as the history data. A sample program, EXT2ARPS, is provided that interpolates an external data set to the ARPS grid and writes it in the history data format. A user is required to provide his/her own subroutine to read the external data (see rdextfil.f).

In ARPS, the base-state arrays are horizontally homogeneous, but the total time-dependent arrays are fully three dimensional. It is the total fields that you are initializing. Typically the base-state arrays are assigned with some kind of horizontally averaged values and this average should be taken at constant height levels, rather than along the coordinate surfaces.

Q: Is it possible to initialize the ARPS model with gridded data (i.e., an initial field from the RUC, ETA, or NGM), or any other gridded data forms?

A: Yes. ARPS provides a utility called ext2arps, which reads a gridded external data set, interpolates it to the ARPS grid, then writes it out in the standard ARPS history dump format so that it can be used as the initial condition for ARPS.

Currently, RUC data on its original hybrid coordinate in GRIB format, pressure-surface data in GEMPAK format, and ETA model data in GRIB format are supported by EXT2ARPS. Given that the GRIB reader used in EXT2ARPS is general (obtained from NCEP), other gridded data in GRIB format can be fed into EXT2ARPS without much difficulty. Certain customization is usually needed to map the arrays on a new grid to the ARPS grid.

Past versions of EXT2ARPS also supported RUC and LAPS data sets in NetCDF format.

Q: Does ARPS have an objective analysis package?

A: Yes. It's called the ARPS Data Analysis System (ADAS) and is included in the ARPS distribution package. It is a sophisticated package that is closely coupled with the forward model, yet it can also be run as a stand-alone system. The ADAS real-time analysis page for the Southern Great Plains area can be found at http://www.caps.ou.edu/ADAS and/or http://throttle.ou.edu. A supplemental chapter describing ADAS can be found at http://www.caps.ou.edu/ARPS.

Q. Can someone tell me about the surface data file produced and how do I look at it? The code says it is in GrADS format - does this mean I need to get hold of a copy of GrADS before I can look at it or can the file be converted to other formats?

A: The surface data files produced by subroutines SOILDIAG and WRTFLX are in GrADS format. If you have GrADS installed you may display the output. In fact, the GrADS-format surface data is a sequential series of 'surface history dumps'. If you don't have GrADS installed, you may convert the surface data to any other format convenient to you. Please take a look at subroutine WRTFLX in file soildiag3d.f to see how the data are written. There is no time information included in the data file, but the GrADS control file contains the grid and time information. The control file should have a name like "runname.ctl" or "runname.ctl.??", where runname is the name that you give to the run through arps.input and ?? is the version number.

The regular soil model fields and the surface characteristics arrays are written into the ARPS history dumps if sfcout is set to 1 in arps.input. If so, these arrays can be plotted in ARPSPLT.

Q. I've got GrADS working - it was simple enough (no script file needed!). However I've noticed that the XDEF and YDEF are output in metres, whereas GrADS seems to expect them to be lat-lon (and there's no other way of telling it where on earth it is)- the net result of this last one was that I had surface fluxes for Nauru (small island, South Pacific) plotted over a map of north Africa, Greece, Spain and Turkey!!

A: GrADS assumes the data grid is lat/lon, while the ARPS grid is, at most, a map-projected x-y grid. (If your GrADS is version 1.5 or newer, it may recognize Lambert projection data, in which case you can display the data on the correct map.)

Q: I have a couple of questions on using external boundary conditions. In your example you specify one boundary conditions file

and one initial condition file

So, the boundaries will be updated by interpolation between these two times, 18 and 21. What happens when ARPS is to be run for 6 or more hours? What I am trying to get at is how to specify the names of the files when boundary updating is done, say, every 3 hours for a 12-hour forecast run. In the above example only a 3-hour run can be made. Where do you specify multiple boundary file names? Have you considered a long run with boundary updating every 3 hours or so? The boundary information could come from any available large-scale model like ETA or MAPS.

A: Namelist &exbcpara is used to run the ARPS model. The parameters listed above mean that the EXBC files have names of the form may24c.19950524.hhmmss, where hhmmss represents the hours, minutes and seconds of the EXBC file. The EXBC files start at 21:00:00, and the ARPS model will automatically look for the next file every 10800 seconds (3 hours). The parameter tintvebd stands for "Time Interval of External Boundary Data".

Namelist &extdfile is NOT used to run the ARPS model. It is used only by the program EXT2ARPS, which converts external data to the ARPS grid so that they can be used as ARPS initial and external boundary conditions.

When ARPS runs for more than 3 hours, it will look for the next file at 6 hours, 12 hours, 18 hours, and so forth, until the run is terminated. If you try to run ARPS with EXBC for 6 hours while the only EXBC file available is at 3 hours, the run will stop at 3 hours and, as a result, the restart and history files at 3 hours will be generated. Incidentally, it is acceptable to have a 12-hour file instead of a 6-hour file; the model will interpolate the EXBC between 3 hours and 12 hours.
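
The naming and timing convention can be illustrated with a short sketch (a hypothetical helper, not part of ARPS):

```python
# Enumerate the EXBC file names ARPS will look for, given the naming
# convention <runname>.<yyyymmdd>.<hhmmss> and interval tintvebd (seconds).
from datetime import datetime, timedelta

def exbc_filenames(runname, start, tintvebd, nfiles):
    return [f"{runname}.{start + timedelta(seconds=i * tintvebd):%Y%m%d.%H%M%S}"
            for i in range(nfiles)]

# The example above: files every 10800 s starting 1995-05-24 21:00:00
names = exbc_filenames("may24c", datetime(1995, 5, 24, 21, 0, 0), 10800, 3)
# names[0] is "may24c.19950524.210000"
```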

Q: A number of questions concerning using NMC RUC data in EXT2ARPS.

(1). Does ext2arps support NMC RUC data? I found only getgemcruc (GEMPAK RUC) instead of getnmcruc in makearps.

Yes, ARPS 4.1.5 or later supports NMC RUC data sets in GRIB format.

(2). I think subroutine RDNMCGRB (in rdnmcgrb3d.f) will do the de-GRIBbing for me. How do I then set the dimensions nx_ext, ny_ext and nz_ext (in extdims.inc)?

Yes, it does. Set nx_ext=81, ny_ext=62 and nz_ext=25.

I've tried using wgrib and got output like "nx 81 ny 62". Are these the required values? Are there other methods to find them?

Exactly. You may use wgrib or gribscan. The latter is distributed with GrADS.

(3). How can I tell ext2arps the origin of external fields, to make it know the relative location of external grid to arps grid?

You don't need to do this if you are dealing with supported data. Otherwise, you will have to provide the projection parameters that set up the external data grid. They are:

iproj_ext - map projection type
scale_ext - scale factor (usually 1.0)
latnot_ext - latitude(s) where the projection is true
trlon_ext - longitude where the projection is true
lat0/lon0 - latitude/longitude at a specific grid point
x0, y0 - x and y coordinates at that grid point (lat0/lon0)

For RUC and ETA, these parameters are included in their data files, and ext2arps should be able to find them.

Q: I've been using the NCEP mso awips files as input to ext2arps. There are some additional variables available in the GRIB files that look like they might be useful. I just wanted to check if there were any plans to make use of any of these.

In the 3D file, there is variable 153. My NOAA variable tables don't have 153, but looking at the fields in Savi3D it appears to be a specific humidity for total condensate.
In the SFC file:
variable#
223 Plant Canopy Surface Water
86 Soil Moisture Content (2 levels)
85 Soil Temperature

A: ext2arps supports NCEP mso awips files as input data. The variables the program picks up are based on the current needs of the ARPS model. However, it's not difficult to add more variables to the retrieval list: you need to add your own conversion/processing code to ext2arps. To select variables, modify file gribcst.inc. Please read the comments in that file and the file README.grib2arps included in the ARPS package.

Q: I am trying to use the ETA model output to initialize and provide boundary conditions for ARPS.

One thing I haven't been able to figure out is what to set the variables in "extdims.inc" to.

For the #212 grid nx_ext = 185 ny_ext = 129 nz_ext = ???

The z dimension isn't available as a standard part of the Grib data format (section GDS). Is it important or should it be set to match the z dimension of the model itself?

A: The dimension parameters in extdims.inc are for the external data set. There are several data sets among the Eta products. The GRIB #212 AWIP3D data are in pressure coordinates with 39 levels (50, 75, 100, ..., 950, 975, 1000 mb), so you may set nz_ext=39. It is not necessary that nz_ext equal the total number of vertical levels for the 3-D variables. The program will read in all selected 3-D variables and assign them to individual arrays, each with the number of levels = min(var_nr3d,nz_ext), where var_nr3d is the number of vertical records of a 3-D variable. However, the interpolation program assumes nz_ext is the number of vertical levels in the external data set. If the actual number of levels is smaller than nz_ext, there will be a problem because some levels will contain no data.
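
As a quick sanity check on nz_ext, the 39 AWIP3D pressure levels can be enumerated:

```python
# The #212 AWIP3D levels run from 1000 mb up to 50 mb in 25 mb steps,
# giving the 39 levels quoted above.
levels_mb = list(range(1000, 49, -25))
print(len(levels_mb))  # 39
```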

Q: I am using ETA model fields to initialize ARPS. When I plot out the resulting fields (after initializing with arps), the lowest layer has a reference height of -0.375 (-375 meters). With dz = 750 I assumed this to represent the bottom of the lowest layer (from -375 to 375). The problem is that the pressure values at this level look as if they have been interpolated to a level below ground. I'm getting values in the 1084.0 mb range.

A: The GrADS data are written at scalar points in all directions. In the vertical, the ground surface is at the k=2 W-point, while the model lower boundary is at k=1. The first scalar point, k=1, is half of dz below the ground:

-------------- W ------------- k=3
--------------  S ------------- k=2
-------------- W ------------- k=2 ground surface z=0
--------------  S ------------- k=1 z=-dz/2
-------------- W ------------- k=1

See page 154 of the ARPS 4.0 User's Guide for reference. This is why the lowest level in the GrADS data is at -dz/2.
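
For flat terrain and uniform dz, the staggering above can be sketched numerically (hypothetical helper functions, not ARPS code):

```python
# W levels sit at z = (k-2)*dz (so k=2 is the ground surface), and scalar
# levels sit halfway between W levels, putting the first scalar point at -dz/2.
def w_height(k, dz):
    return (k - 2) * dz

def scalar_height(k, dz):
    return (k - 1.5) * dz

dz = 750.0
print(scalar_height(1, dz))  # -375.0, the "below-ground" level in the question
print(w_height(2, dz))       # 0.0, the ground surface
```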

Q: Because GrADS output is in successive layers with no interpolation, the addition of terrain in the model produces some distorted fields (fields on the computational grid). It doesn't make much sense for me to store fields in GrADS format if they need to be interpolated before displaying. Does this sound correct, or am I overlooking a simple and obvious solution?

A: You are correct that GrADS does not support terrain-following coordinates; therefore the displayed fields are distorted in the presence of terrain. We have recently written a program called ARPSINTRP that can interpolate an ARPS history data set to a rectangular grid before writing it out in another history dump format, including GrADS. We use GrADS for quick looks at the model fields, but use ARPSPLT for most publication-quality plots.

Q: Can ARPS be initialized with arbitrary GRIB files (or just NMC grid #212)? The source code comments seemed to indicate only #212 and I'm unable to get sufficient data from another grid at this time.

Running ext2arps with a GRIB file from another grid threw an error, but I was unsure whether that was the result of a 212-only limitation, or the result of me not having the right parameters in the other GRIB file.

A: Currently, ext2arps supports only NMC grid #212 40-km AWIP3D Eta data and #87 60-km RUC data. Since different data sets need different procedures to convert them onto the ARPS grid, it is almost impossible to create a general GRIB data converter; we have to target individual data sets. However, we have tried to make the GRIB data reader and decoder (obtained from NMC), which are the keys to the converter, as general as possible. Therefore, users may write their own conversion code using the GRIB reader and decoder. This should make the task much easier. Please read README.grib2arps for more information.

Back to top of page


Terrain and Surface Characteristics Data
Back to top of page

Q: We had problems getting the Cray version of ARPS to read a terrain data file that we created using our own program. ARPS ends abnormally when trying to read the first record in the file, although it opens the file fine. The write statements we used to create the file appear to be identical to those in arpstern12.f (ARPSTERN). What's happening?

A:
ARPS uses the IEEE binary format (the one used by most 32-bit Unix workstations) rather than the Cray native binary format for the terrain file (and other binary input and output files). This is achieved by calling two Cray routines

CALL asnctl ('NEWLOCAL', 1, ierr)
CALL asnfile(tern_file,'-F f77 -N ieee', ierr)

before the open statement for file tern_file. The opened file is then assumed to be in the IEEE format. You need to add these two statements before the open statement of the program that creates your terrain data, so that the generated data are written in the IEEE format.
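
On a workstation you can produce the same layout directly. Below is a minimal Python sketch of one big-endian IEEE Fortran 77 sequential record (a 4-byte length marker before and after the payload); the actual terrain file written by WRITTRN contains several records, so check iolib3d.f for the full layout:

```python
# Hypothetical: write one array as an f77 sequential record in big-endian
# IEEE format (4-byte record-length markers surrounding the raw floats).
import struct
import numpy as np

def write_f77_record(f, values):
    payload = np.asarray(values, dtype=">f4").tobytes()
    marker = struct.pack(">i", len(payload))
    f.write(marker + payload + marker)
```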


Q: I am having problems with the terrain files. Is the terrain file called dir30sec.dat?

A:
There are two steps that you have to go through to generate the final terrain file for ARPS40.

First, you need to run the three dir*.f programs to convert the original ASCII terrain files into direct-access files, which are machine dependent. You need to do this only once if you keep these files. The input file for these programs is arpstern.input.

The direct-access files (each set consists of two files, one for data, the other for the record description) are what you need to run ARPSTERN (using arpstern.input). The final product, arpstern.dat, should be used by ARPS; the name of this file is specified in arps40.input. After it is created, the file may be renamed to better describe its contents, as long as the new name is also specified in arps40.input.
Section 8.2 provides a more detailed description of these preprocessing steps for terrain data.

Q: I want to run ARPS with 'real' terrain out of our own terrain database. What format is necessary for the terrain-file?

A:
You can find the format of the ARPS terrain data by looking at subroutines WRITTRN and READTRN, which write and read the terrain data. They are in iolib3d.f.

Q: Is the ARPS package useable for any other area than the southern United States?

Yes, ARPS is applicable anywhere in the world. The model comes with terrain data and a pre-processor, and of course you can use your own terrain data if you have access to special databases. More information can be obtained on our web site under the ARPS 4.0 official release, ARPS characteristics; the large set of bullets describes the terrain database. Let me know if I can be of further assistance!

Q: I'm trying to view the topography generated by the ARPSTERN program using the GrADS special script.

A: If you have NCAR Graphics installed on your machine and you compile arpstern with the -ncar option (makearps -ncar arpstern), you will be able to get the plots shown in the User's Guide. The default turns the plotting off.

If you do not have NCAR Graphics, you can install the ZXPLOT graphics package (ftp://ftp.caps.ou.edu/pub/ZXPLOT) and use ARPSPLT or the mergetrn.f program to plot the terrain fields. For the former, you will need to run ARPS first to generate an ARPS history data set that can be fed into ARPSPLT. The latter plots terrain only; it tries to combine two versions of terrain data, but if you specify the same file twice, it will plot just that one.

Back to top of page


Graphic Plotting and Visualization
Back to top of page

Q: When I tried to compile and link the plotting program by doing makearps arpspltncar, I got a message saying 'zxncarf77 command unknown'.

A:
The release of ARPS does not come with the ZXPLOT library used by the plotting program ARPSPLT. Install ZXPLOT library first. See the next question.

Q: How do I install ZXPLOT?

A:
The compiled object code of the ZXPLOT library for IBM RS/6000, Cray Y-MP, Cray Y-MP/C90, Sun SPARCstation, DEC Alpha running Ultrix or OSF/1 Mach UNIX, and HP-UX is available in pub/ZXPLOT on the CAPS anonymous FTP server ftp.caps.ou.edu.

You need to transfer the appropriate tar file for your system and place the object codes in one of your permanent directories. You need to edit the link scripts, zxncarf77, zxpost0f77 and zxpostf77, so that they point to the location where the object codes reside. You should then make these scripts executable and add the directory name that contains these script files into your command search path. Chapter 12 provides more information on ZXPLOT.

It is possible that the object codes provided are not compatible with the system version you are using. In some cases, we have to gain a temporary access to your system to install the package for you.

For availability of ZXPLOT on other systems, contact arpssupport@ou.edu.

Q: What type of graphics package is available to us for animation of model results? We currently are using ZXPLOT or GRADS plotting packages, but unfortunately cannot animate the 2d and 3d model results. Do you use a special package? If so, how can we get it?

A:
The latest version of the NCAR Graphics metafile displayer idt comes with an animation function that can be used to build an animated sequence of 2-D images. GrADS also has 2-D animation capabilities. If you are only interested in 2-D color raster images, ARPS can produce HDF image file sequences to be played back by NCSA Ximage.

At CAPS, we use the commercial software Savi3D for 3-D visualization and animation. Other visualization packages, including Data Explorer and AVS (Advanced Visualization System), are also available locally.

ARPS was interfaced with the free software Vis5D by the Lawrence Livermore group with little effort. Vis5D is available from the anonymous FTP site iris.ssec.wisc.edu. If you are interested in 3-D visualization and/or animation, we suggest you consider using Vis5D.

Back to top of page

ZXPLOT Graphics Package
Back to top of page

Q: I should be grateful if you could inform me of the procedure I should use to get a copy of the necessary plotting routines and tools as mentioned on page 16

A: That would be ZXPLOT, I suppose. You can look at our ftp site, ftp://ftp.caps.ou.edu/pub/ZXPLOT, to see if there is a package that fits your platform. ZXPLOT has been compiled for various platforms, such as RS/6000 AIX, Cray C90/J90/Y-MP, Alpha OSF/1 and Ultrix, Sun4 and SPARC, SGI IRIS, etc., and even Linux.

Back to top of page


Output Data Format
Back to top of page

Q: I tried to generate HDF format history dumps by specifying the format flag hdmpfmt as 3, but got a message saying that the program stopped in HDFDUMP; what was I doing wrong?

A:
First, to read or write HDF format data, you need to make sure that the HDF library is installed on your system. HDF is a data format developed at NCSA, the National Center for Supercomputing Applications at the University of Illinois. The source code is freely available from ftp.ncsa.uiuc.edu.

Secondly, you need to include the option -io hdf for makearps, so that the HDF data I/O routines are properly linked.

The same is true for NetCDF format.

Q: We tried to plot ARPS history data generated on the Cray on our DEC Alpha workstations using ARPSPLT. The program failed when trying to read the data. We guessed that the binary data from Cray are not compatible with DEC Alpha. If that's true, what is the best solution?

A:
ARPS generates history dumps in IEEE instead of the Cray native binary format. The IEEE data can be read on the IBM RS/6000, Sun and IRIS but not on a DEC Alpha (as far as we know).

A suggested solution is to use HDF or NetCDF format for the history dumps. To do this, you need to have HDF or NetCDF library installed on both of your machines. The source code of both libraries can be obtained freely, from addresses given in Section 10.1.

Q: Does ARPS 4.2.1 support NetCDF? I am trying to initialize ARPS with some external radar data that I processed using CEDRIC.

A: We stopped supporting NetCDF starting from 4.1.5 because no one is using it. It can be plugged back in if needed, but that may not be what you are looking for.

Basically, users are free to use whatever I/O method it takes to read in their data (what we call "external" data). We have only prepared code for the external data sets we use ourselves (GEMPAK, GRIB, etc.).

You could use our old NetCDF history file reader as a guide, but it might be easier to use a CEDRIC data reader program and getgemruc as guides. Also, does CEDRIC have all the variables ARPS needs (at a minimum, ARPS needs height, temperature, pressure and wind)? If not, you may need to write a small intermediate routine anyway to blend data from a sounding and/or another model with the CEDRIC data.

Back to top of page


Code Performance and Platform Dependency
Back to top of page

Q: Are there benchmark tests that reveal how the ARPS performs on different computing platforms?

A: Click here to view performance statistics of the ARPS on selected platforms.


Q: I ran a simulation with ARPS 4.0 on a Cray supercomputer. I wanted to do a 1 hr integration, but the job didn't finish, even after 10 CPU hours. The simulation included ice.

Then, I submitted two short jobs to the Cray using ARPS 4.0. The only difference between the two is that one included ice and the other didn't. The job without ice performed the 600s integration in 1995 CPU seconds. After 1 CPU hour, the job with ice had only integrated out to 287s. So the simulation with ice takes much longer. Does this seem to be right?

A:
Our warm rain microphysics subroutine is maximally optimized. We evaluate the power functions in the package using lookup tables, which are much more efficient than direct evaluations.

The ice microphysics code has many exponential and power calculations. Our tests showed that the ice subroutines used 8 times as much CPU time as the warm rain microphysics (see Chapter 11 on code performance).
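
The lookup-table idea is generic and easy to illustrate. The sketch below (illustrative only, not the ARPS code) tabulates x**0.8 once and replaces each subsequent power evaluation with a linear interpolation:

```python
import numpy as np

# Precompute x**0.8 on a fine grid covering the expected argument range.
XMIN, XMAX, N = 0.0, 10.0, 2001
_x = np.linspace(XMIN, XMAX, N)
_table = _x ** 0.8
_dx = _x[1] - _x[0]

def pow08(x):
    """Tabulated approximation of x**0.8 for XMIN <= x <= XMAX."""
    i = min(int(x / _dx), N - 2)
    frac = x / _dx - i
    return _table[i] * (1.0 - frac) + _table[i + 1] * frac
```

A table lookup plus one multiply-add is far cheaper than a general power function, which is the efficiency gain described above.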

There is one big loop that, by itself, cannot be vectorized on the Cray without a special compilation option. You need to do 'makearps -ice arps40', so that file micro_ice3d.f is compiled with the -aggressive option. You may need to delete the old micro_ice3d.o first (see Chapter 3 on makearps options) before re-running makearps.

Q: I'm running the model in 2-D mode, and am using nx = 1003 and nz = 45 (along with ny = 4). This requires about 13 MW of memory on the Cray, which seems excessive. Some of it is probably due to the way that the y-direction is handled, and some may be due to the fact that it appears that unused arrays (such as those for surface physics) are fully dimensioned even if that package is not used. I suggest putting in a parameter option at compile time that allows the user to completely deactivate modules (terrain, surface physics) that are not needed, so that the storage associated with their arrays can collapse down to a minimum size. (Also, collapse ice storage arrays when ice microphysics is not used). What do you think?

A:
The memory usage is about right. ARPS has 66 3-D arrays, which use 66 x 1003 x 45 x 4 = 12 MW in your case. The 2-D surface arrays account for a small amount of memory. ny = 4 is the main reason for the "excessive" usage, and this is a penalty we chose to pay in return for maintaining a single 3-D version of the code. You can easily save the storage used by the ice variables by adding an EQUIVALENCE (qi, qs, qh) statement in the declaration portion of the ARPS40 main program. If you are not using tke, you can add it to the equivalence list too. As a result, qi, qs, qh and tke would occupy the same memory space.
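
The arithmetic can be checked directly (the factor of 4 is ny):

```python
# 66 3-D arrays on an (nx, nz, ny) = (1003, 45, 4) grid, one Cray word each.
n3d, nx, nz, ny = 66, 1003, 45, 4
words = n3d * nx * nz * ny
print(words)  # 11915640, i.e. about 12 million words (12 MW)
```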

In ARPS, the 3-D arrays used by the externally-forced boundary condition option can be collapsed to 1x1x1 arrays; the dimensions are set in exbc.inc. ARPS defines 15 3-D work arrays that are reused frequently throughout the code. We tried to strike a balance among memory usage, CPU usage and code readability. We also assume that memory on today's computers is relatively cheap.

Your suggestion of using compile-time options to turn off certain packages completely is a very good one. However, this typically involves code preprocessors (conditional compilation is not generally available in Fortran), which would add procedural complexity and might hurt code readability. At this time, we chose to handle everything with make and the script makearps.

Back to top of page

Model Physics
Back to top of page


Q: To be asked.

A:
To be answered.

Back to top of page