Executive Summary
      Problem Statement
      Solution Method
      Loading Simulation
      Results
      Flowchart
      Figures (and Grounds)
      Code Documentation
      Code
      References
Executive Summary
          We set out to computationally model a two-dimensional cargo hold and find adequate methods for loading any given set of rectangular packages. Due to the problem's computational complexity (it constitutes an instance of the knapsack problem and is documented as NP-hard(1,7)), it is necessary to approximate loading methods. Our program devises "good" loading sequences (defined as those that are acceptably fast, load as many packages as possible, and balance the center of mass in the center of the hold) composed of percent chances to use one of several computationally simple loading methods.
          The program first populates a list of packages with lengths, widths, and masses. From these it derives further characteristics such as density and "squariness," the closeness of a package's dimensions to those of a square package. It then generates a self-organizing map, or SOM, a type of neural network used here as a search algorithm. A self-organizing map contains a number of output nodes and a few input nodes, each with certain attributes. In our case, node attributes are percent chances to use one of several computationally simple loading methods. A package may be loaded using one of several different "greedy approximations," which prioritize packages according to their characteristics. A greedy approximation by mass, for example, loads the heaviest package first, then the next heaviest, and so on. Each node, then, stores five percent chances from 0 to 100: greedy approximations by mass, area, density, and squariness, and a chance to pick an unloaded package at random and load it.
Problem Statement
          It was our intention to develop a program capable of generating loading sequences for any set of rectangular packages into a rectangular hold, optimizing for speed, fit, and balance. However, when we want to fit any set of objects into a limited space, we encounter the "Knapsack problem,"(1,7) a classic mathematical problem that is notoriously difficult to solve computationally. The Knapsack problem is documented as NP-hard, meaning that it is at least as hard as every problem in the set NP. NP is the set of all problems that can be solved in polynomial time by a non-deterministic Turing machine. Problems solvable in polynomial time are those solvable in a number of steps bounded by a polynomial in the "problem size," or size of the input; in other words, these are problems that can be solved "fast."(2,5) Turing machines are computational "black boxes" which process input and produce output. In a deterministic Turing machine, one input is always mapped to exactly one output. In a non-deterministic Turing machine, this is not necessarily the case: the machine may follow any of several possible transitions at each step.(6) Because of the computational complexity of the Knapsack problem, it becomes necessary to use a dynamic programming approach.
Solution Method
          Dynamic programming refers to the branch of programming that deals with finding reductions of complex problems; that is, dynamic solutions entail solving small divisions of a problem which, when combined in a significant way, yield the solution to the problem as a whole.(8) In our case, we are interested in approximating "good" loading methods in a Monte Carlo fashion: by randomly generating our initial SOMs and using probabilities to generate a mix of loading methods, we arrive at a stochastic solution to this complex problem. We use an algorithm called a self-organizing map, or SOM, to approach desirable loading methods.
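          As a minimal illustration, a map of such nodes might be randomly initialized as in the following C++ sketch (the names are assumptions for illustration, not our actual source); each node's five percent-chances are normalized so that they sum to 100:

    // Sketch: random initialization of SOM nodes (hypothetical names).
    // Each node holds five percent-chances that sum to 100.
    #include <array>
    #include <cstdio>
    #include <random>
    #include <vector>

    struct Node {
        // Chances (0-100) to load by mass, area, density, squariness,
        // or to pick an unloaded package at random.
        std::array<double, 5> chance;
    };

    std::vector<Node> randomMap(int width, int height, std::mt19937 &rng) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<Node> nodes(width * height);
        for (Node &n : nodes) {
            double sum = 0.0;
            for (double &c : n.chance) { c = u(rng); sum += c; }
            for (double &c : n.chance) c = 100.0 * c / sum;  // normalize to 100
        }
        return nodes;
    }

    int main() {
        std::mt19937 rng(42);
        std::vector<Node> map = randomMap(10, 10, rng);
        std::printf("first node: %.1f %.1f %.1f %.1f %.1f\n",
                    map[0].chance[0], map[0].chance[1], map[0].chance[2],
                    map[0].chance[3], map[0].chance[4]);
    }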
          SOMs first randomly generate the characteristics for an array of nodes laid out in two or more dimensions. They then compare these characteristics to the characteristics of several inputs. The node which is most similar to an input is called that input's best matching unit, or BMU. Once the BMU for an input has been found, the SOM updates all nodes by the following formula, making nodes within a certain radius of the BMU more similar to the input. As a result, the SOM becomes more similar to the inputs with each iteration. For each input characteristic value X at iteration n,

$$X_{n+1} = X_n + \theta(D)\,\alpha(n)\,(X_{\mathrm{input}} - X_n)$$

θ is the "neighborhood function," a hyperbolically decreasing value that describes the distance from the BMU within which it is acceptable to update. α is a monotonically decreasing function that reduces the magnitude of the update as time progresses. D is the Euclidean distance

$$D = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$$

between the node in question and the BMU. In our case, θ(x) is a hyperbolic function parameterized by a, the width of the map (in nodes) minus three; α(x) is also hyperbolic, as described below.
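          A minimal C++ sketch of this update step follows (the names are assumptions, and theta and alpha stand in for the hyperbolic functions described here; this is an illustration, not our actual implementation):

    // Sketch of the SOM update step: move every node within the
    // neighborhood toward the input's characteristics.
    #include <cmath>
    #include <vector>

    struct Node { double x[5]; };  // five percent-chances per node

    // Euclidean distance between two grid positions (a node and the BMU).
    double gridDistance(int x1, int y1, int x2, int y2) {
        return std::sqrt(double((x1 - x2) * (x1 - x2) +
                                (y1 - y2) * (y1 - y2)));
    }

    void updateMap(std::vector<Node> &map, int width, int height,
                   int bmuX, int bmuY, const double input[5],
                   double (*theta)(double), double (*alpha)(int), int iter) {
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                double d = gridDistance(x, y, bmuX, bmuY);
                double t = theta(d);        // zero outside the update radius
                if (t <= 0.0) continue;
                Node &n = map[y * width + x];
                for (int k = 0; k < 5; ++k)
                    n.x[k] += t * alpha(iter) * (input[k] - n.x[k]);
            }
    }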
          We use this type of hyperbolic function because the learning coefficient must decrease monotonically and never drop below zero. If the learning coefficient were to become negative, the updates would have the opposite of the intended effect and would grow ever larger as the magnitude of the coefficient increased. A hyperbolic function has an asymptote that can be easily positioned. We started with a simple hyperbolic function with its asymptote at zero, then adjusted the asymptote so that the learning coefficient neither became so low that changes were insignificant nor stayed so high that the SOM never stabilized. We tailored the learning function while developing our Red, Green, Blue (RGB) SOM. We used the RGB SOM to become familiar with SOMs because we had an idea of what the results should be and could therefore gauge the correctness of our SOM algorithm. The resulting function has an asymptote at 0.5. Figures 1 through 3 show some results of our initial work with the RGB SOM.
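          For illustration, a learning coefficient of this kind might be written as follows (the constant 10.0 is an assumed value, not the one we used):

    // A hyperbolic learning coefficient of the kind described: monotonically
    // decreasing, never negative, with its asymptote at 0.5.
    double alpha(int n) {
        return 0.5 + 10.0 / (n + 1.0);  // approaches 0.5 as n grows
    }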
          Using these formulas, we update a random set of nodes with a random set of inputs containing the same type of data as the nodes. Our node characteristics represent percent chances to use one of several computationally simple loading methods. It is a simple matter to find, for example, the heaviest package, load it first, then load the next heaviest, and so on; this is known as a greedy approximation by mass. Each node in our SOM stores five characteristics: the percent chance to greedily approximate by size, by mass, by density, or by the similarity of the package shape to a square, and the percent chance to pick a package at random from the list of unloaded packages and load it. Together, these five characteristics comprise what we call a "loading method." Each greedy approximation has a probability assigned to it such that the probabilities for all five add up to 100. We then select among the approximations by these probabilities, creating a unique and computationally simple combination of the individually less desirable greedy approximations. At each iteration, we test our nodes in this way to determine the "fitness" of each loading method, an average of speed, fit, and balance. The few, the proud, the five nodes with the greatest fitness are used as the inputs for the next iteration. We repeat this process until the improvement in fitness from step to step is very slight, at which point we halt execution and output the loading method with the greatest fitness.
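          Sequencing packages from one node's five percent-chances might look like the following C++ sketch (the names and the particular squariness measure are assumptions for illustration):

    // Sketch: build a loading order from one node's five percent-chances.
    // For each slot we roll 0-100, pick the corresponding greedy
    // approximation, and load the best remaining package by that criterion.
    #include <cmath>
    #include <cstddef>
    #include <random>
    #include <vector>

    struct Package { double length, width, mass; };

    double keyFor(const Package &p, int method) {
        double area = p.length * p.width;
        switch (method) {
            case 0: return p.mass;          // greedy by mass
            case 1: return area;            // greedy by size (area)
            case 2: return p.mass / area;   // greedy by density
            // One possible squariness measure: closer dimensions score higher.
            case 3: return -std::fabs(p.length - p.width);
            default: return 0.0;            // random pick handled separately
        }
    }

    std::vector<Package> sequence(std::vector<Package> pkgs,
                                  const double chance[5], std::mt19937 &rng) {
        std::uniform_real_distribution<double> roll(0.0, 100.0);
        std::vector<Package> order;
        while (!pkgs.empty()) {
            double r = roll(rng), cum = 0.0;
            int method = 4;
            for (int m = 0; m < 5; ++m) {
                cum += chance[m];
                if (r < cum) { method = m; break; }
            }
            std::size_t best = 0;
            if (method == 4) {  // pick an unloaded package at random
                best = std::uniform_int_distribution<std::size_t>(
                           0, pkgs.size() - 1)(rng);
            } else {            // pick the package maximizing the greedy key
                for (std::size_t i = 1; i < pkgs.size(); ++i)
                    if (keyFor(pkgs[i], method) > keyFor(pkgs[best], method))
                        best = i;
            }
            order.push_back(pkgs[best]);
            pkgs.erase(pkgs.begin() + best);
        }
        return order;
    }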
Loading Simulation
          In order to evaluate loading sequences, we must be able to simulate their results. To do this, we simulate the loading of the packages using what we call the "Tetris" method. We place our first package at the coordinates (0, 0) and then continue to load packages along the increasing x-axis such that their vertical edges are aligned. After selecting the x position, we check along the side of the package closest to y = 0 to see whether it is closer to y = 0 than any of the previously loaded packages that could intersect it. If any such package is detected, the new package is moved so that it can no longer intersect. As a result, the hold loads much like a game of Tetris: the packages stack on top of each other and "drop" into the hold from left to right. During the loading process, if any package exceeds the boundaries of the hold, it is marked as unloadable. In our visualization, the y-axis increases toward the bottom of the screen and the x-axis increases toward the right, so the first package appears at the top left corner of the hold and the other packages "fall" upward from left to right. A sample of our loading visualization appears in Figure 4, in which a set of 75 packages has been loaded into a hold; it was sequenced using probabilities of 0 for all methods except greedy approximation by size. The brown packages were all loaded successfully; the red packages did not fit. The white dot near the center shows the center of mass, and the diagonal lines increase in length for packages loaded later in the loading sequence. In this load, the center of mass would have been favorable, but not all of the packages were loaded, so the sequence is not sufficient. We used the SDL(10) (Simple DirectMedia Layer) API to produce our loading visualizations.
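          A minimal C++ sketch of the "Tetris" drop test follows (assumed names; an illustration rather than our actual code). A package at horizontal position x "drops" toward y = 0 until it rests on the floor of the hold or on a previously loaded package whose x-span overlaps its own:

    // Sketch of the "Tetris" placement test.
    #include <algorithm>
    #include <vector>

    struct Placed { double x, y, w, h; };  // position and size in the hold

    // The y at which a package of width w resting at position x settles.
    double dropY(const std::vector<Placed> &loaded, double x, double w) {
        double y = 0.0;                        // floor of the hold (y = 0)
        for (const Placed &p : loaded)
            if (x < p.x + p.w && p.x < x + w)  // x-spans intersect
                y = std::max(y, p.y + p.h);    // must rest beyond this package
        return y;
    }

    // A package is unloadable if its resting position exceeds the hold.
    bool tryLoad(std::vector<Placed> &loaded, double x, double w, double h,
                 double holdW, double holdH) {
        double y = dropY(loaded, x, w);
        if (x + w > holdW || y + h > holdH) return false;
        loaded.push_back({x, y, w, h});
        return true;
    }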
Results
          As of April 3, 2007, we have created a working SOM, which began as the RGB (Red, Green, Blue) SOM used in our examples and was adapted to function in our final implementation. We have also created functions to sequence the packages using data from the SOM, apply the "Tetris" loading simulation to those packages, and display the loaded hold with both loaded and unloadable packages. Unfortunately, we introduced bugs while integrating the loading algorithms with the SOM. We are currently debugging the combined code and expect to have results for the 2D implementation of the full program by Expo. A flowchart of our code as it stands now is shown below.