How to install COMPHEP on one of the CDF machines

Getting the code

If you do not have a copy of the CompHEP package, download one from the official CompHEP page. A local copy of version 4.1.10 is available at /cdf/data23a/COMPHEP/CompHEP_V_41.10.tar.gz. An unofficial but more recent version (4.2) is also available.

Unpacking and compiling the code

Choose a convenient location for the code (for example, /cdf/data22d/foo/comphep), then move the tar file there, unpack, and compile:

  mkdir /cdf/data22d/foo/comphep
  mv CompHEP_V_41.10.tar.gz /cdf/data22d/foo/comphep
  cd /cdf/data22d/foo/comphep 
  gzip -cd CompHEP_V_41.10.tar.gz | tar xvf -
  cd V_41.10

Setting the COMPHEP environment variable

Next, define the environment variable COMPHEP to point to your copy of the code. In this example (csh/tcsh syntax), we would use:
  setenv COMPHEP /cdf/data22d/foo/comphep/V_41.10
where you should substitute the appropriate location for your CompHEP directory. For convenience, consider adding this command to your login file.
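The setenv command above is for csh/tcsh (the usual CDF login shell). If your login shell is bash, the equivalent, using the same example path, is:

```shell
# bash equivalent of the csh 'setenv' command above (same example path)
export COMPHEP=/cdf/data22d/foo/comphep/V_41.10
echo "$COMPHEP"
```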

Creating a working area

Each time you wish to generate a new process, you will need to create a working directory and copy over the necessary files from the area pointed to by $COMPHEP. The copying is performed by the install command included with CompHEP. For example, we could create a directory called wgamma and perform the install there:

  mkdir /cdf/data22d/foo/comphep/wgamma
  cd /cdf/data22d/foo/comphep/wgamma
After the install, you should see the following files and subdirectories (the trailing * marks an executable, as in ls -F):

  comphep*  comphep.ini  models/  results/  tmp/  usr/  usrlib.a

If you do, you are now ready to start using CompHEP.

Running CompHEP

From the working directory that you created for your process, run the comphep executable:
  ./comphep &
This will launch a GUI from which you can perform all of the steps to calculate cross sections, generate events, and produce kinematic distributions. You may also use the GUI to define the process, cuts, and other properties, but then use the batch capabilities to actually carry out the calculations and event generation.

Selecting the process

The first step involves selecting a Model from the menu. We have been using the _SM_ud model, which treats the two lightest generations of quarks as degenerate. After that, select Enter Process and answer the prompts. For a process such as p pbar -> W + gamma, we would enter:
    Enter Process
      Enter process:                p,P -> e1,N1,A
      composite 'p' consists of:    u#,U#,d#,D#,G
      composite 'P' consists of:    u#,U#,d#,D#,G
      Enter CMS Energy in GeV:      1960
      Exclude diagrams with:        

Creating the code for performing the calculations

The next several steps are designed to produce the code which will be used later for any numerical calculations. Briefly, you should do:
  View diagrams                   shows all the diagrams for our process

  Squaring technique
    View squared diagrams         
    Symbolic calculations         
    Write results                 
      C code (for num.calc)       outputs the necessary code
    C-compiler                    compiles the code
At this point, a terminal window should appear which shows the compilation of the C code. If compilation is successful, this window will close and a second CompHEP window will pop up.

PDFs, cuts, etc.

In this second window, we set the PDFs for the incoming particles, the event cuts to be applied, and any regularization (to avoid blow-ups at the poles in the cross-section calculation). In these menus, use the F1 key for help and ESC to return to the previous menu.

For the PDFs, we use CTEQ 5L, which is set from the IN state menu:

  IN state

The possible kinematic cuts are detailed in the Help menu (F1). These include Tx (transverse momentum, P_t, of particle x), Mxy (invariant mass of x & y), Jxy (jet cone angle between x & y), and Nx (pseudorapidity of x). Note that the particle indices x and y refer to the ordering of the particles in the subprocess, not the process.

For a process such as W gamma (where the subprocess looks like u# D# -> e1, N1, A), we might use cuts that look like:

    Parameter  |> Min bound <|> Max bound <|
    T3         |25.0         |             |
    T4         |25.0         |             |
    T5         |25.0         |             |
    N3         |-1.0         |1.0          |
    N4         |-1.0         |1.0          |
    N5         |-6.0         |6.0          |
    J34        |0.4          |             |

The regularization is a slightly trickier issue. When the cuts are extremely loose or completely undefined, regularization appears to be necessary to avoid peaks where the cross-section calculation might blow up. This is true for the W and Z poles, and also for photon collinearity.

Entries in this table may look like:

    Momentum    |> Mass  <|> Width <| Power|
    45          |MW       |wW       |2     |
    13          |0        |0        |1     |
    23          |0        |0        |1     |
    34          |0        |0        |1     |
    345         |MW       |wW       |2     |
However, we have found that if a more restrictive set of cuts is used, entries in the regularization table may cause the calculation to crash with the message ERROR IN REGULARIZATION. If this message appears, the job may seem to continue unabated, but no results will be produced. In general, it is best to leave this table empty whenever a reasonable set of cuts is defined.

To batch or not to batch

Once you reach this stage, you may continue with the interactive GUI or exit to use the batch mode. If you continue with the GUI, then do:
(more explanation is forthcoming; ask for details)
    Start integration
    Clear statistics
    nCall = 50000
    Set Distributions
    Start integration
    Generate events
      Start search of maxima
      Number of events=(set value)
      Launch generator
Or, if you decide to use the batch capability, exit the GUI completely (F9 in both windows). Then, from the working directory for your process (not the results directory):
  ./ -run vegas     (starts the iterations to calculate the cross section;
                     repeat until the precision reaches 1% or so)
  ./ -run           (performs  "-run vegas,max,evnt"  all at once)
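Repeating the vegas step can itself be scripted from the shell. A minimal sketch, assuming the batch script is named comp_user (an assumed name; substitute the actual script in your working directory):

```shell
# Repeat the vegas pass a fixed number of times to refine the grid.
# 'comp_user' is an assumed name for the CompHEP batch script.
for pass in 1 2 3; do
  if [ -x ./comp_user ]; then
    ./comp_user -run vegas
  else
    echo "pass $pass: batch script not found (dry run)"
  fi
done
```

In practice you would inspect the precision reported after each pass (or in results/prt_n) and stop once it is near 1%.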

How to run COMPHEP

To make a selection, use the arrow keys to highlight it and press ENTER; ESC takes you back one menu. If the right choice is already highlighted, just hit ENTER. At the bottom of the main window (the first to pop up) are functions you can click on; they are very useful.

How to Run CompHEP Numerical Calculations in Batch Mode

If you installed the version of CompHEP with batch mode, the batch script should be in your working directory (e.g. comp_user). You may need to correct the path to perl on your computer in the first line of the script, and/or make the file executable (chmod +x).

The perl script ./ runs the numerical part of the calculations without the GUI. You can start this script after completing all of the symbolic calculations. It runs the n_comphep file in the results subdirectory. For help, type:
./ --help

First, set up the parameters of the calculation. The easiest way is with the GUI:
cd results

Set the parameters you want (PDF, cuts, ...) for the first subprocess, then exit the GUI and type:

cd ../

The file results/batch.dat will be created automatically, with the same parameters for all of the subprocesses.

If you want to change parameters for specific subprocesses, run the GUI again, choose the subprocess, set the new parameters, and exit to save the session.dat file. (Alternatively, you can save the session.dat file by stepping to the last menu item, Vegas, and then stepping back.) You can then add the parameters from the session.dat file for that particular subprocess with:
./ -add_ses2bat

The default number of generated events is 10000; to change this, edit this line in the results/batch.dat file:
#Events 100 1 0.200000 2.000000 10000
This needs to be changed for each subprocess as appropriate; you can look at the per-subprocess cross sections (see below) to tailor the number of events per subprocess.
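With many subprocesses, editing every #Events line by hand is tedious; a sed one-liner can rewrite the trailing event count on all of them at once. This sketch operates on a mock batch.dat so the commands are self-contained; point sed at your real results/batch.dat instead:

```shell
# Create a mock results/batch.dat with one '#Events' line (stand-in for the real file)
mkdir -p results
printf '#Events 100 1 0.200000 2.000000 10000\n' > results/batch.dat
# Replace the last field (the event count) with 50000 on every '#Events' line
sed -i 's/^\(#Events.*[[:space:]]\)[0-9][0-9]*$/\150000/' results/batch.dat
cat results/batch.dat
```

(The -i in-place flag is GNU sed; on other systems, write to a temporary file and move it back.)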

After setting the parameters for all of the subprocesses you are ready to run the calculations.
./ -run vegas
This command (see below) builds the grid and gives you an estimate of the precision; keep repeating this step until you get to 1% or so. Then:
./ -run
This command runs all the steps of the calculation for all subprocesses. If you want to calculate only the cross section, run
./ -run vegas
which calculates the cross sections and the Vegas grid, storing the cross sections in the protocol file results/prt_n and the grid in results/batch.dat. You can also run selected subprocesses:
./ -run vegas -proc 3,1,4-6,5
(the numbers can be given in any order). This calculates the cross section for subprocesses 1 and 3-6.

If you want to generate events after you get the cross section with good precision, you can run:
./ -run max,evnt

If you want to know more about the options, run:
./ -h (or --help for the long help)

Notes and tips (the order is somewhat random, sorry!):

To start a new session, or resume an old one

To make an input file to Pythia, and then have Pythia write it out in STDHEP format

Steve Mrenna's Pythia STDHEP Format

This is written by the subroutine pywrite.f, which is called from Pythia's main.f. The subroutine can be found in /cdf/data23a/COMPHEP/cpyth62/interf62. Here is an abbreviated excerpt (the standard HEPEVT common-block declaration and a generic WRITE to standard output are filled in for readability; see the file itself for the exact write statement):

C...HEPEVT commonblock.
      PARAMETER (NMXHEP=4000)
      COMMON/HEPEVT/NEVHEP,NHEP,ISTHEP(NMXHEP),IDHEP(NMXHEP),
     $JMOHEP(2,NMXHEP),JDAHEP(2,NMXHEP),PHEP(5,NMXHEP),VHEP(4,NMXHEP)
      DOUBLE PRECISION PHEP,VHEP
      DATA KNT/0/
      SAVE KNT
      SAVE /HEPEVT/

C...Write one record per HEPEVT entry.
      DO I=1,NHEP
        WRITE(*,*) ISTHEP(I),IDHEP(I),(JMOHEP(J,I),J=1,2),
     $   (JDAHEP(J,I),J=1,2),(PHEP(J,I),J=1,5),(VHEP(J,I),J=1,4)
      ENDDO

Reading STDHEP Format with AC++, and running CDFSIM

The easiest thing is to copy the executable from the cdf23 computer, along with the .tcl file to run it. In this tcl file you have to change the input file and output file to your preference (it is self-evident). You also have to adjust the number of events; always put one less than the number of events in the binary file.

This will create a .root file that is HEPG; this file can be input to ntuples or cdfSim in the usual way.
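The event count to put in the .tcl file is simply one less than the count in the binary file; a quick shell calculation (NEVTS_IN_FILE is a made-up example value):

```shell
# If the STDHEP binary file holds 10000 events, tell the .tcl file to read 9999
NEVTS_IN_FILE=10000
echo $((NEVTS_IN_FILE - 1))
```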

To build the executable yourself, do the following:

  newrel -t 4.2.0 4.2.0
  cd 4.2.0
  addpkg generatorMods

Copy the following files (these files have not been updated in generatorMods yet) into the corresponding directories:

Then, from the base directory of the release, run gmake; this will build cdfGen. Then run cdfGen with the .tcl file mentioned before.

First you need to build the Duke ntuple. From the base directory of your release, do:

  addpkg -h dukehkg
  gmake clear
  gmake dukehkg.all

In the bin directory you should now find an executable called myeleNew. This is the one. To run it, copy the .tcl file from the cdf23 computer, /home/cdf/carron/4.2.0/TalktoSin.tcl, to the bin directory and do:

  myeleNew TalktoSim.tcl >& logfile

Remember to edit the .tcl file to have your HEPG bank file as input, and to edit the output file location and name to your preference. After this you will end up with a root file containing the filled Duke ntuple; the parton information is in the branch HEPG.

If you have processed the events with a ntuple other than the Duke ntuple, you will need to generate the root skeleton and then copy the relevant parts of the code below. The instructions are at the end of this section.

Copy the files from the cdf23 computer: /cdf/data23a/MC_data/CPythia_HEPG/ttbar/ttbar_hepg.C and /cdf/data23a/MC_data/CPythia_HEPG/ttbar/ttbar_hepg.h (this is a sample analysis for ttbar). You MUST edit the .h file to point the macro at the correct input ntuple file. Also edit the end of the .C file to set the name of your output file for your plots (don't overwrite my files!!). Then open a root session and at the root prompt do:

  Root > gSystem->Load("$ROOTSYS/lib/");
  Root > .L ttbar_hepg.C
  Root > ttbar_hepg t
  Root > ofstream fout("nameofyourfile.txt", ios_base::app);
  Root > t.GetEntry(12);      (here you select the event you want to dump)
  Root > t.dump_hepg(fout);

When you exit root, you will see that you have created a text file with the details of the HEPG bank for the selected event.

Now, to create the plots (assuming you have already edited the names of the input file and output histogram file), open a root session and do:

  Root > gSystem->Load("$ROOTSYS/lib/");
  Root > .L ttbar_hepg.C
  Root > ttbar_hepg t;
  Root > t.Loop();

The plots will be created. (You can alter the .C file to make the plots the way you want them, or to select the particles you want; this is just an example for the ttbar case.) To view the plots, do: root filename.root, then in root: Root > TBrowser browser, and click your way to the plots.

If the ntuple is different from the Duke ntuple: (Coming soon ...)

Links to COMPHEP home page, documentation:

COMPHEP home page
UC CDF Group home page
Pythia home page

Where Monte Carlo Files are Kept

Wish List

This is a wish list for features that would be handy. It's made out of ignorance-- some of these features may already exist; others perhaps should never exist. Apologies for the forwardness in making it -- feel free to edit it, adding explanations on how to do these things, or on why one shouldn't want to do them. HJF

Big Wishes

Little Wishes

Created by:

Lev Dudko,
Henry J. Frisch (frisch @ hep . uchicago . edu),
Sebastian Carron (carron @ phy . duke . edu), and
Chadd Smith (chadd @ cdf . uchicago . edu)

Last updated May 20, 2003 by Henry Frisch