Simulated Annealing methods have been applied successfully to a variety of global extremization problems. They have attracted significant attention because of their suitability for large-scale optimization problems, especially those in which the desired global minimum is hidden among many local minima. The basic idea of Simulated Annealing is its analogy with thermodynamics, in particular with the way liquids freeze and crystallize, or metals cool and anneal. The first nontrivial solution along this line was provided by Kirkpatrick et al. [6][7] for classical systems, and it was extended by Ceperley et al. [8] to quantum systems. It strictly follows the quasi-equilibrium Boltzmann-Gibbs statistics, using a Gaussian visiting distribution, and is sometimes referred to as Classical Simulated Annealing (CSA) or the Boltzmann machine. The next interesting step in this subject was Szu's proposal [9] to use a Cauchy-Lorentz visiting distribution instead of a Gaussian one. This algorithm is referred to as Fast Simulated Annealing (FSA) or the Cauchy machine.
On the other hand, a Generalized Simulated Annealing (GSA) or Tsallis machine approach, which closely follows the recent Tsallis statistics [10][11], has been proposed by Stariolo and Tsallis [1]; GSA generalizes both the FSA and CSA procedures.
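Since the Boltzmann and Cauchy machines are recovered from GSA in particular limits, it may help to look at the q-exponential that generalizes the Boltzmann weight in Tsallis statistics. The sketch below (the function name `q_exp` is ours, not from the paper) checks numerically that the q-exponential reduces to the ordinary exponential as q approaches 1:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: [1 + (1 - q) * x] ** (1 / (1 - q)).
    Reduces to exp(x) in the limit q -> 1; defined as 0 beyond its cutoff."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0  # the q-exponential vanishes past its cutoff
    return base ** (1.0 / (1.0 - q))

# The q -> 1 limit recovers the Boltzmann exponential:
for q in (2.0, 1.5, 1.1, 1.001):
    print(f"q = {q}: exp_q(-1) = {q_exp(-1.0, q):.5f}")
print(f"exp(-1)  = {math.exp(-1.0):.5f}")
```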
We have implemented the GSA algorithm as a method to calculate the minimum energies of the conformational geometries of different molecular structures. The technique can be applied equally within quantum [5] or classical methods; in our case, we have used the latter. The GSA method is based on the correlation between the minimization of a cost function (the conformational energy) and the geometries randomly generated during a slow cooling. An artificial temperature is introduced and gradually lowered, in complete analogy with the well-known annealing technique frequently used in metallurgy, in which a molten metal reaches its crystalline state (the global minimum of the thermodynamic energy). In our case the temperature plays the role of an external noise.
The artificial temperature (or set of temperatures) acts as a convenient stochastic source for eventual de-trapping from local minima. Near the end of the process the system is hopefully within the attractive basin of the global minimum. The challenge is to cool the temperature as fast as possible while still guaranteeing that no irreversible trapping at any local minimum has occurred. More precisely, we search for the quickest annealing (approaching a quenching) that keeps the probability of ending in the global minimum equal to one.
The procedure for searching the minima (global and local), or for mapping the energy hypersurface, consists in comparing the conformational energies of two consecutive random geometries $\vec{x}_t$ and $\vec{x}_{t+1}$ obtained from the GSA routine, where $\vec{x}$ is an N-dimensional vector containing all N atomic coordinates to be optimized. The geometries at two consecutive steps are related by

$$\vec{x}_{t+1} = \vec{x}_t + \Delta\vec{x}_t, \qquad (1)$$
where $\Delta\vec{x}_t$ is a random perturbation of the atomic positions. To generate the random vector $\Delta\vec{x}_t$, the present GSA routine uses an extension of the procedure of reference [1]. We have calculated $\Delta\vec{x}_t$ by numerical integration of the visiting distribution probability $g_{q_V}(\Delta\vec{x}_t)$, writing $\Delta\vec{x}_t = F^{-1}(s)$, where $s$ is a random vector in [0,1] obtained from an equi-probability distribution and $F^{-1}$ is the inverse of the integral of $g_{q_V}$ given by:

$$F(\Delta\vec{x}_t) = \int_0^{\Delta\vec{x}_t} g_{q_V}(\Delta\vec{x}'_t)\, d(\Delta\vec{x}'_t), \qquad (2)$$
where $q_A$ ($q_V$) is the acceptance (visiting) index.
The integral of $g_{q_V}$ has an analytical solution only in the cases $(q_A, q_V) = (1, 1)$ or $(q_A, q_V) = (1, 2)$. For the general case [12] it is necessary to perform a numerical integration. In this paper, the inverse has been obtained by series integration, inverting the resulting polynomial series, whose expansion is cut off at 17th order.
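For general indices the routine therefore tabulates the integral numerically and inverts it. A generic sketch of that inverse-transform step follows (helper names are ours; the Cauchy density stands in for the visiting distribution because its quantiles can be checked analytically, it being, up to normalization, the $q_V = 2$ case):

```python
import bisect
import random

def build_inverse_cdf(density, lo, hi, n=4001):
    """Tabulate the integral of `density` on [lo, hi] (trapezoid rule),
    normalize it, and return a function mapping u in [0, 1] to the x
    with CDF(x) = u, using linear interpolation between grid points."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    cdf = [0.0]
    for i in range(1, n):
        step = xs[i] - xs[i - 1]
        cdf.append(cdf[-1] + 0.5 * (density(xs[i]) + density(xs[i - 1])) * step)
    total = cdf[-1]
    cdf = [c / total for c in cdf]

    def inverse(u):
        j = min(max(bisect.bisect_left(cdf, u), 1), n - 1)
        c0, c1 = cdf[j - 1], cdf[j]
        t = 0.0 if c1 == c0 else (u - c0) / (c1 - c0)
        return xs[j - 1] + t * (xs[j] - xs[j - 1])

    return inverse

# Cauchy density: the qV = 2 visiting case, up to normalization.
inv = build_inverse_cdf(lambda x: 1.0 / (1.0 + x * x), -50.0, 50.0)
dx = inv(random.random())   # one random displacement drawn from it
```

For the Cauchy case the exact inverse is tan(pi(u - 1/2)), which the tabulated inverse approaches as the grid is refined and the truncation interval widened.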
The integral in equation (2) can be written as
where , , .
Solving equation (4) by power series, we have
or
We ask for the inverse function $G$, such that $G(F(y)) = y$. From Eq. (6) we see that the inverse function may be expressed in terms of a power series
by virtue of the following theorem: let $f$ be analytic at $z_0$ with $f'(z_0) \neq 0$; then the inverse of $f$ exists and is analytic in a sufficiently small region about $f(z_0)$, and its derivative is $1/f'(z)$.
The coefficients of the inverse series may be expressed in terms of those of the original series by introducing Eq. (7). However, it is possible to derive the coefficients more elegantly by employing Cauchy's formula, in which case they take the general form
The first few coefficients can be expressed as:
This last inversion was introduced in the GSA package [12], while the original proposition [1] uses a Lévy-flight distribution as the visiting distribution.
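The order-by-order inversion of a power series used above can be sketched generically. The routine below (names are ours; it matches coefficients recursively rather than evaluating Cauchy's formula, and it does not reproduce the paper's 17th-order coefficients):

```python
def compose(g, f, order):
    """Coefficients of g(f(y)) up to y**order; a series is a list whose
    index k holds the coefficient of y**k, and f[0] must be 0."""
    result = [0.0] * (order + 1)
    power = [0.0] * (order + 1)
    power[0] = 1.0                      # running power f**k, starting at f**0 = 1
    for k in range(1, order + 1):
        nxt = [0.0] * (order + 1)       # nxt = power * f, truncated at y**order
        for i in range(order + 1):
            if power[i]:
                for j in range(1, order + 1 - i):
                    nxt[i + j] += power[i] * f[j]
        power = nxt
        for i in range(order + 1):
            result[i] += g[k] * power[i]
    return result

def invert_series(f, order):
    """Inverse series g with g(f(y)) = y, given f[0] = 0 and f[1] != 0."""
    f = list(f) + [0.0] * (order + 1 - len(f))
    g = [0.0] * (order + 1)
    g[1] = 1.0 / f[1]
    for n in range(2, order + 1):
        # choose g[n] so the y**n coefficient of g(f(y)) vanishes
        g[n] = -compose(g, f, order)[n] / f[1] ** n
    return g

# Inverse of z = y + y**2 is y = z - z**2 + 2*z**3 - 5*z**4 + ...
c = invert_series([0.0, 1.0, 1.0], order=4)
```

The test case has a closed form, y = (-1 + sqrt(1 + 4z))/2, whose Taylor coefficients the routine reproduces.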
In summary, the whole algorithm for mapping the energy hypersurface and searching for the global minimum of the energy is:
(i) Fix the parameters $(q_A, q_V)$ (we recall that $(q_A, q_V) = (1, 1)$ and $(1, 2)$ respectively correspond to the Boltzmann and Cauchy machines). Start, at $t = 1$, with arbitrary internal coordinates and a high enough value of the visiting temperature $T_{q_V}(1)$, and cool as follows:

$$T_{q_V}(t) = T_{q_V}(1)\,\frac{2^{\,q_V-1} - 1}{(1+t)^{\,q_V-1} - 1}, \qquad (10)$$

where $t$ is the discrete time corresponding to the computer iterations.
(ii) Then randomly generate the new atomic coordinates $\vec{x}_{t+1}$ from $\vec{x}_t$, as given by the visiting distribution probability:

$$g_{q_V}(\Delta\vec{x}_t) = \left(\frac{q_V-1}{\pi}\right)^{N/2} \frac{\Gamma\!\left(\frac{1}{q_V-1} + \frac{N-1}{2}\right)}{\Gamma\!\left(\frac{1}{q_V-1} - \frac{1}{2}\right)} \frac{\left[T_{q_V}(t)\right]^{-N/(3-q_V)}}{\left[1 + (q_V-1)\,\dfrac{(\Delta\vec{x}_t)^2}{\left[T_{q_V}(t)\right]^{2/(3-q_V)}}\right]^{\frac{1}{q_V-1} + \frac{N-1}{2}}}$$

where $\Gamma$ is the gamma function. For sufficiently long simulations this procedure ensures that the system can both escape from any local minimum and explore the entire energy hypersurface. This equation is used in the GSA routine and differs from the general proposal of [1] because we build a minimization vector using (1). This change is due to the isotropy of the space used [12], and not to the simulation of a Lévy-flight stable stochastic process as in [1].
(iii) Then calculate the conformational energy $E(\vec{x}_{t+1})$ of the new molecular geometry by using the classical force-field computational code (THOR). The new energy value is accepted according to the following rule:

if $E(\vec{x}_{t+1}) \le E(\vec{x}_t)$, replace $\vec{x}_t$ by $\vec{x}_{t+1}$;

if $E(\vec{x}_{t+1}) > E(\vec{x}_t)$, run a random number $r \in [0,1]$: if $r > P_{q_A}$ (the acceptance probability), retain $\vec{x}_t$; otherwise, replace $\vec{x}_t$ by $\vec{x}_{t+1}$.
For the algorithm, the acceptance probability function $P_{q_A}$ is

$$P_{q_A} = \left[1 + (q_A-1)\,\frac{E(\vec{x}_{t+1}) - E(\vec{x}_t)}{T_{q_A}(t)}\right]^{-1/(q_A-1)},$$

with $T_{q_A}(t) = T_{q_V}(t)$.
(iv) Calculate the new temperature $T_{q_V}(t)$ using (10) and go back to (ii) until the minimum of $E(\vec{x})$ is reached within the desired precision.
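Steps (i)-(iv) above can be sketched for a one-dimensional toy energy. The sketch assumes the standard Tsallis-Stariolo cooling schedule and acceptance rule and uses the analytically invertible Cauchy visiting case ($q_V = 2$); the toy energy, default parameters, and all names are illustrative, not the paper's implementation:

```python
import math
import random

def gsa_minimize(energy, x0, t_steps=3000, T1=10.0, qV=2.0, qA=1.0, seed=0):
    """One-dimensional GSA sketch.  qV = 2 gives the Cauchy visiting
    distribution (sampled here via its analytic inverse CDF); qA = 1
    recovers the Boltzmann (Metropolis) acceptance rule."""
    rng = random.Random(seed)
    x, E = x0, energy(x0)
    best_x, best_E = x, E
    for t in range(1, t_steps + 1):
        # (i) Tsallis cooling schedule (requires qV != 1); T(1) = T1 by construction
        T = T1 * (2.0 ** (qV - 1.0) - 1.0) / ((1.0 + t) ** (qV - 1.0) - 1.0)
        # (ii) visiting step: analytic Cauchy inverse CDF, exact only for qV = 2;
        # a general qV would need the numerical inversion described in the text
        x_new = x + T * math.tan(math.pi * (rng.random() - 0.5))
        dE = energy(x_new) - E
        # (iii) accept downhill moves always, uphill moves with probability p
        if dE <= 0.0:
            x, E = x_new, E + dE
        else:
            if abs(qA - 1.0) < 1e-12:
                p = math.exp(-dE / T)                    # Boltzmann machine
            else:
                base = 1.0 + (qA - 1.0) * dE / T
                p = base ** (-1.0 / (qA - 1.0)) if base > 0.0 else 0.0
            if rng.random() < p:
                x, E = x_new, E + dE
        if E < best_E:
            best_x, best_E = x, E
        # (iv) loop: the temperature is recomputed from t at the top
    return best_x, best_E

# Illustrative double-well energy: local minimum near x = -2, global near x = +2
toy = lambda x: (x * x - 4.0) ** 2 - 0.5 * x
x_min, E_min = gsa_minimize(toy, x0=-2.0)
```

The heavy-tailed Cauchy visits provide the occasional long jumps that de-trap the walker from the local well, while the shrinking temperature scale concentrates late visits around the best basin found.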
In order to make the construction of the computational code clear, we present a flowchart in Fig. 1.
In the next section we present some considerations about the THOR molecular modeling computational code.