Nanocell Optimization Techniques

Todd Dolinsky

Department of Computer Science
Duke University
Summer, 2001


Table of Contents

I.  Background of the Nanocell Optimization Problem
II. Optimization Algorithms
     A. Nelder-Mead Simplex (Amoeba Algorithm)
          1. Background
          2. Code and Modifications
          3. Results
III. References


I. Background of the Nanocell Optimization Problem

     A nanocell is a device that can be programmed to function as any of a wide range of standard logic gates. Although each nanocell covers only a few square micrometers, it contains many small molecules that function as switches, allowing current to pass in the "on" (conducting) position and blocking current in the "off" (insulating) position. It is the network of these switches that creates a logic gate, and the nanocell can be trained to do so post-fabrication by an optimization algorithm. By optimizing the states of the switches, a programmed nanocell can functionally act like a logic gate. Currently, a genetic algorithm is used to determine the optimal combination of internal switch settings needed to obtain a logic gate.

     Along each edge of the nanocell there are pins that can be classified as input, output, or control pins. Currently, in the nanocell simulation, the voltages on the control pins are not varied, and remain constant throughout the optimization process. Our goal is to vary the voltages on these pins in order to find a better solution to the nanocell problem. Once the internal switches have been set by the initial genetic algorithm, a second optimization can then be run (at lower voltages, so as not to reset the internal switches) to find the desired values for the control pins. The following work discusses the different optimization algorithms (and modifications made to these algorithms) used to find the most accurate and efficient solutions to the nanocell optimization problem.

     An article on molecular electronics was recently featured in Scientific American. To learn more about the subject, click here to read the article.

Return to Table of Contents


II. Optimization Algorithms

     Optimization algorithms are needed in order to set the nanocell to behave like a logic gate. There are many different optimization algorithms, and rarely is there a single "correct" algorithm for a given problem. In the case of the nanocell, the optimization algorithm needs to be a nonlinear multidimensional algorithm (one dimension for each control pin we wish to vary). The reason for a nonlinear algorithm is that there is no discrete set of points to pick from, and thus no way to test every possible point-solution. Even if there were, such an exhaustive test would be inefficient, since every point would need to be sampled in order to find the solution. A nonlinear algorithm can therefore handle problems that are not discrete, and do so efficiently.

Return to Table of Contents

A. Nelder-Mead Simplex (Amoeba Algorithm)

Background

     The Nelder-Mead Simplex algorithm, also known as the Amoeba Algorithm, is based on the basic simplex method and is used for minimization or maximization of a multidimensional function. In order to understand the modifications in the Nelder-Mead algorithm, one must first understand the standard simplex algorithm:

     The basic simplex algorithm takes N + 1 test points (where N is the number of dimensions); this set of points is called a simplex. A one-dimensional problem has a line segment as its simplex, a two-dimensional problem has a triangle, and so on. All of the points of the simplex are evaluated. The worst point in the simplex is then thrown out, and a new point is found by "reflecting" the worst point through the centroid of the remaining test points. This process is repeated until the points stabilize and a minimum (or maximum) is found. To see a visual example of the basic simplex algorithm, click here.
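The reflection step described above can be sketched as follows (a minimal illustration, not the code used in this work; the function and variable names are my own):

```c
#include <stddef.h>

/* Reflect the worst vertex through the centroid of the remaining vertices.
 * p[0..n] are the n + 1 simplex vertices (each of dimension n); `worst`
 * indexes the vertex being replaced.  This is the basic simplex move;
 * Nelder-Mead adds expansion and contraction on top of it. */
void reflect(double **p, int n, int worst, double *out)
{
    for (int j = 0; j < n; j++) {
        double centroid = 0.0;
        for (int i = 0; i <= n; i++)
            if (i != worst)
                centroid += p[i][j];
        centroid /= n;                              /* mean of the n kept vertices */
        out[j] = 2.0 * centroid - p[worst][j];      /* mirror about the centroid */
    }
}
```

For example, reflecting (0, 0) out of the triangle {(0, 0), (1, 0), (0, 1)} moves it through the centroid (0.5, 0.5) to (1, 1).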

     The Nelder-Mead Simplex Algorithm takes its concept from the basic simplex algorithm. However, it differs in that if the simplex takes a reflective step in a minimizing direction, it will expand, while if it takes a step in the wrong direction, it will contract. These features allow the simplex algorithm to quickly reach the area of the minimum by increasing the volume of the simplex, and then to efficiently find the optimal point by decreasing the volume of the simplex and therefore the breadth of the search near a minimum. When finding an optimum point, this decrease in volume allows for greater efficiency than the basic simplex method.
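The expand-or-contract decision can be sketched as a small helper (an illustrative convention, not the exact Numerical Recipes code: here the returned factor f moves the worst vertex to centroid + f * (worst - centroid), so -1 reflects, -2 reflects and expands, and +0.5 contracts; comparing against the worst vertex is a simplification):

```c
/* Choose the Nelder-Mead step factor from the reflected point's value.
 * f_reflected better than the best vertex  -> expand in that direction;
 * f_reflected no better than the worst     -> contract toward the centroid;
 * otherwise keep the plain reflection. */
double choose_factor(double f_reflected, double f_best, double f_worst)
{
    if (f_reflected < f_best)
        return -2.0;   /* good direction: reflect and expand */
    if (f_reflected >= f_worst)
        return 0.5;    /* bad direction: contract */
    return -1.0;       /* acceptable: keep the reflection */
}
```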

Return to Table of Contents

Code and Modifications

     In order to attempt to solve the nanocell optimization problem, I coded the amoeba algorithm in Visual C++. The basis for this code (and a brief description) appears in Chapter 10.4 of the book Numerical Recipes in C. In this version of the amoeba algorithm, there are several constants that may be changed in order to vary the efficiency and correctness of the results. These constants are the main means by which the amoeba algorithm can be modified to improve results for the nanocell problem.

     One of the first constants that I varied was the contraction factor, the fraction by which the simplex decreases in volume during any contraction step. In the original Nelder-Mead simplex algorithm, this constant is set to 0.5 (50%). This value, while quite functional, allows for two undesired situations. First, since the volume of the simplex is halved during each contraction step, by the time a local minimum is approached the simplex may already be too small to navigate around it, and thus might stop at an undesired point. Second, when the simplex approaches the global minimum, it still contracts at the same rate, so many trials are wasted making the simplex marginally smaller while evaluating within a fractional offset of the global minimum. I therefore tested the option of decreasing the contraction factor as a function of how many trials had passed. The possible functions can be seen below.

Figure 1: Possible Contraction Factor Functions for Initial Value of 0.9

The four different contraction factor functions are as follows:

     Constant - keep the contraction factor constant for all trials.
     Linear - decrease the contraction factor linearly as the number of trials increases.
     Exponential - a nonlinear decrease of the contraction factor of the form .5 - x^(1/constant).
     Decayed Trig - a second method for nonlinear decrease, of the form .25*cos(Pi*x*constant + .25)*e^(-x/constant).

There are several other constants that greatly affect the results of the amoeba algorithm. These constants are:

     Maximum Number of Trials - Number of trials attempted before the algorithm stops. In the initial amoeba algorithm, this value can be an arbitrarily large number. When the contraction factor is varied, however, this value may be optimized, since the contraction functions are dependent on it (i.e. the slope of the linear function).

     Initial Value of Contraction Factor - The starting point for the contraction factor, between 0 and 1. Values closer to 1 contract (in the constant case) more slowly, while values closer to 0 rapidly contract.

     Termination Criteria - When to stop the algorithm. Currently, I have seen two different functions used to decide whether a computed value has fallen below a specified tolerance; if it has, the algorithm stops. The computed value is a function of the average distance between the points (details to be added in the near future).

     Size of Initial Simplex - This factor, denoted hereafter as ALPHA, determines the size of the initial simplex according to Spendley's algorithm. Spendley's algorithm spaces N + 1 points equally far away from each other, with the spacing set by the ALPHA value. Thus for a two-dimensional problem (three points), Spendley's algorithm creates an equilateral triangle from an initial guess point; for three dimensions (four points), it creates a regular tetrahedron, and so on. Thus with an initial guess point and a size variable, an initial simplex can be created.

Return to Table of Contents

Results

     In order to test the modified algorithms, I found several evaluation functions that have been used in previous optimization tests by other researchers. To begin with, I used three separate two-dimensional functions, which are given below with their starting points and minima.

Rosenbrock's Banana Function

f(x,y) = 100(y - x^2)^2 + (1 - x)^2
Starting Point: (-1.2, 1)
Minimum: 0 at (1,1)

Freudenstein and Roth Function

f(x,y) = (-13 + x + ((5 - y)y - 2)y)^2 + (-29 + x + ((y + 1)y - 14)y)^2
Starting Point: (0.5, -2)
Minima: 0 (global) at (5, 4); 48.9842 (local) at (11.41, -0.8986)

Beale's Function

f(x,y) = (1.5 - x(1 - y))^2 + (2.25 - x(1 - y^2))^2 + (2.625 - x(1 - y^3))^2
Starting Point: (1, 1)
Minimum: 0 at (3,0.5)

Figure 2: Two-Dimensional Test Functions

     For the functions above, I used the amoeba algorithm to find the minimum, varying the contraction factor as discussed above. The ALPHA constant was also varied. The tolerance of the termination criterion was set at 2.5 × 10^-9, and the maximum number of trials allowed was 2500. The results with the fewest trials are in bold.

Rosenbrock's Banana Function

Size Factor Constant Test Linear Test Exponential Test Trig Test
.25 161 164 141 135
.5 137 144 167 154
1 131 131 125 130
2 159 162 126 133
4 145 145 134 142
Avg #: 146.6 149.2 138.6 138.8
% Improvement 0% -1.8% 5.5% 5.3%

Freudenstein and Roth Function

Size Factor Constant Test Linear Test Exponential Test Trig Test
.25 79 81 74 2500*
.5 79 78 67 93
1 86 88 75 82
2 75 74 72 66
4 72 67 63 66
Avg #: 78.2 77.6 70.2 76.8
% Improvement 0% 0.8% 10.2% 1.8%

Beale's Function

Size Factor Constant Test Linear Test Exponential Test Trig Test
.25 83 83 75 94
.5 80 83 78 79
1 85 86 68 80
2 74 71 66 69
4 86 87 78 85
Avg #: 81.6 82.0 73.0 81.4
% Improvement 0% -0.5% 10.5% 0.2%

Powell's Quartic Function

Size Factor Constant Test Linear Test Exponential Test Trig Test
.25 290 220 240 254
.5 280 220 220 249
1 194 288 281 248
2 180 240 191 214
4 241 216 245 249
Avg #: 237.0 236.8 235.4 242.8
% Improvement 0% 0.0% 0.7% -2.5%

Wood's Function

Size Factor Constant Test Linear Test Exponential Test Trig Test
.25 391 313 258 212
.5 344 329 272 249
1 315 349 283 373
2 234 206 184 203
4 408 397 656 495
Avg #: 338.4 318.8 330.6 306.4
% Improvement 0% 5.8% 2.3% 9.5%

Figure 3: Results for the Amoeba Algorithm with Varied Contraction Factor for Several Test Functions

     * Number of trials reached maximum allowed. Not factored into overall average, as the algorithm did not terminate properly. This is most likely due to repetition of the same points without termination.

     As the tables above show, the exponential test provides the best results overall. By varying the contraction factor along an exponential curve, the algorithm needed fewer trials to find the same minimum. I therefore propose that varying the contraction factor in this manner improves upon the amoeba algorithm. It should again be noted that the constants play a major role in the results of the algorithm, so I next ran the exponential test with varied constants. I first ran tests to see whether the square root function was optimal, or whether another root would provide even better results. Although the results varied greatly, I determined that the square root function was indeed optimal for these tests. I similarly varied the maximum number of trials, on which the exponential curve depends. Although it would be extremely time-consuming to exhaustively test every combination of the exponential variable and the trial-count variable for each function and each ALPHA value, I found that an exponential value of 0.5 and a maximum of 2500 trials returned positive results, as can be seen in the tables. More results are forthcoming in the search to improve upon the exponential algorithm.

     Another way in which I attempted to improve upon the exponential factor results was to add some randomization. Instead of following the exponential curve exactly, I multiplied the contraction factor by a variable drawn from a Gaussian probability distribution. This variation, in turn, introduced several new constants to test, namely the amplitude of the probability distribution and the mean of the Gaussian function. After a few initial tests, it became clear that large deviations from the exponential curve produced disastrous results, most likely because abnormally large contractions could occur in an undesirable location. I am currently continuing to examine this possibility with small amplitudes on the order of 0.01 and less.

Return to Table of Contents


III. References

1. Kelley, C.T., "Detection and Remediation of Stagnation in the Nelder-Mead Algorithm Using a Sufficient Decrease Condition," SIAM Journal on Optimization, SIAM, Philadelphia, 10 (1999), pp. 43-54.

2. Barton, Russell R. and John S. Ivey, Jr., "Nelder-Mead Simplex Modifications for Simulation Optimization," Management Science, Institute of Management Sciences, Providence, R.I., 42 (1996), pp. 954-973.

3. Moré, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom, "Testing Unconstrained Optimization Software," ACM Transactions on Mathematical Software, ACM, 7 (1981), pp. 17-41.

4. Press, William H., Numerical Recipes in C, Cambridge University Press, Cambridge, N.Y., 1995, pp. 408-412.

5. Trabia, Mohamed B. and Xiao Bin Lu, "A Fuzzy Adaptive Simplex Search Optimization Algorithm," 25th Design Automation Conference, ASME, Las Vegas, 1999, DETC99/DAC-8586.

Return to Table of Contents


Last Updated July 18, 2001 by Todd Dolinsky