2007-2008 New Mexico Supercomputing Challenge
Challenge Team Interim Report



    Team Number: 028

    School Name: Alamogordo High School

    Area of Science: Computer Science

    Project Title: Identification of Objects in Real Images


I hope to create a program that can recognize real-world objects in photographs, given a photograph and the coordinates of a point inside the region of the object to be recognized. During the training phase, photographs containing the objects to be distinguished, along with the coordinates of several points within each object, will be entered into the training program to calculate the appropriate synaptic weights. During the execution phase, several pictures containing examples of the objects to be recognized (at least one for every object for which a point was specified during training) will be entered, each with the coordinates of a point somewhere within the object; in this way the success of the training will be determined. The program should be able to discern at least 500 different objects, some of which appear similar when photographed and some of which appear completely different.
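As a rough sketch of the data involved, one training example might be represented in C along the following lines; the field names, the limit on marked points, and the sizes are illustrative, not taken from the actual program.

    /* Hypothetical layout of one training example: the image pixels plus
     * the coordinates of points known to lie inside the target object.
     * Names and limits are illustrative, not from the actual program. */
    typedef struct {
        unsigned char *pixels;        /* raw image data, row-major RGB      */
        int            width, height;
        int            px[8], py[8];  /* up to 8 marked points in the object */
        int            n_points;      /* how many of px/py are used          */
        int            object_id;     /* which of the ~500 objects this is   */
    } TrainingExample;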

At this stage, the equations for the training program and for the program that translates the neural network into program code have been gathered, and the programming process has begun. The neural network used in this project is a multi-layer feed-forward error-back-propagation network. Simply to run the network, each neuron's output is computed layer by layer from the equation

    y_j = f\left( \sum_i w_{ji} \, y_i \right), \qquad f(u) = \frac{1}{1 + e^{-u}}

where the y_i are the neuron's inputs (the outputs of the previous layer), the w_{ji} are its synaptic weights, and f is the sigmoid activation function.
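A minimal C sketch of this feed-forward step, assuming weights stored row-major and no bias terms (both assumptions, since the report does not specify them), might look like this:

    #include <math.h>

    /* Sigmoid activation, f(u) = 1/(1 + e^-u). */
    static double sigmoid(double u) { return 1.0 / (1.0 + exp(-u)); }

    /* One feed-forward layer: y[j] = f(sum_i w[j][i] * x[i]).
     * w is stored row-major: n_out rows of n_in weights each. */
    static void forward_layer(const double *w, const double *x,
                              double *y, int n_in, int n_out)
    {
        for (int j = 0; j < n_out; j++) {
            double net = 0.0;
            for (int i = 0; i < n_in; i++)
                net += w[j * n_in + i] * x[i];
            y[j] = sigmoid(net);
        }
    }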
For training, the network is first executed using the equation above. Next, the error values are calculated, starting at the output layer and working backward:

    E = \frac{1}{2} \sum_k (d_k - y_k)^2

    \delta_k = (d_k - y_k) \, f'(\mathrm{net}_k)            (output neurons)

    \delta_j = f'(\mathrm{net}_j) \sum_k w_{kj} \, \delta_k  (hidden neurons)

where net denotes a neuron's weighted input sum; for the sigmoid, f'(net) = y(1 - y).
After this, the learning adjustments are calculated and applied to the weights:

    \Delta w_{ji}(t) = \alpha \, \delta_j \, y_i + \eta \, \Delta w_{ji}(t-1), \qquad w_{ji} \leftarrow w_{ji} + \Delta w_{ji}(t)
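Continuing the sketch, this weight-update equation with its momentum term could be coded as follows; the storage layout and parameter names are assumptions consistent with the forward-pass sketch above.

    /* One backpropagation update for a single layer, following the
     * equations above. delta_out holds this layer's error terms; x holds
     * its inputs; dw holds the previous Delta-w values so the eta
     * (momentum) term can be applied. */
    static void update_layer(double *w, double *dw,
                             const double *delta_out, const double *x,
                             int n_in, int n_out,
                             double alpha, double eta)
    {
        for (int j = 0; j < n_out; j++)
            for (int i = 0; i < n_in; i++) {
                int k = j * n_in + i;
                dw[k] = alpha * delta_out[j] * x[i] + eta * dw[k];
                w[k] += dw[k];
            }
    }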
In these equations, alpha is the learning constant, eta is the momentum constant, w is a synaptic weight, E is the error, d is a datum (desired output) from the training data, delta is a neuron's error term, and t is the time (in epochs) spent so far training the network.

During learning, the error value starts out high. Generally, the network learns slowly at first; past a certain point, learning accelerates as the weights settle into the correct pattern. All of this, however, is affected by both the alpha and eta parameters. If alpha is too low, the network learns slowly, if at all. If alpha is too high, the output oscillates around the correct values without E ever decreasing to an acceptable level. Eta has similar effects: oscillation around the correct values if it is too high, and slow convergence, or none at all, if it is too low. The best values cannot be calculated in advance, so they must be estimated and then corrected if they do not work.
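A hypothetical outer training loop tying these steps together might look like the following; Network, train_one_example(), and the two stopping constants are assumed helpers with illustrative values, not code from the project.

    #define MAX_EPOCHS    10000   /* illustrative cap on training time */
    #define E_ACCEPTABLE  0.01    /* illustrative target error         */

    typedef struct Network Network;  /* trained weights, layer sizes, etc. */

    /* Hypothetical: runs one example through the forward pass, error
     * calculation, and weight updates sketched above, returning that
     * example's contribution to E. */
    double train_one_example(Network *net, const TrainingExample *ex,
                             double alpha, double eta);

    /* Train for whole epochs until E is acceptably small or time runs out. */
    double train(Network *net, TrainingExample *examples, int n_examples,
                 double alpha, double eta)
    {
        double E = 1e30;
        for (int t = 0; t < MAX_EPOCHS && E > E_ACCEPTABLE; t++) {
            E = 0.0;
            for (int n = 0; n < n_examples; n++)
                E += train_one_example(net, &examples[n], alpha, eta);
            /* If E oscillates here instead of falling, alpha or eta is
             * probably too high; if it creeps down very slowly, too low. */
        }
        return E;
    }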

Of these equations, only the first is needed once the network has been trained. It will be used by a program that translates the trained network into a C program performing the same job as the original network, but without the learning ability.
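As an illustration of what that translator might emit, the sketch below writes out a single-layer evaluation routine with the trained weights baked in as constants. The function names, layout, and output format are assumptions, since the report does not specify them.

    #include <stdio.h>

    /* Sketch of the translator: given trained weights, emit a C source
     * file that evaluates the network with the weights hard-coded, so
     * the generated program carries no learning code at all. */
    static void emit_network(FILE *out, const double *w, int n_in, int n_out)
    {
        fprintf(out, "#include <math.h>\n");
        fprintf(out, "static const double W[%d] = {", n_in * n_out);
        for (int k = 0; k < n_in * n_out; k++)
            fprintf(out, "%s%.17g", k ? ", " : "", w[k]);
        fprintf(out, "};\n");
        fprintf(out, "void run(const double *x, double *y)\n{\n");
        fprintf(out, "    for (int j = 0; j < %d; j++) {\n", n_out);
        fprintf(out, "        double net = 0.0;\n");
        fprintf(out, "        for (int i = 0; i < %d; i++)\n", n_in);
        fprintf(out, "            net += W[j * %d + i] * x[i];\n", n_in);
        fprintf(out, "        y[j] = 1.0 / (1.0 + exp(-net));\n");
        fprintf(out, "    }\n");
        fprintf(out, "}\n");
    }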


Team Members


Sponsoring Teachers

Project Advisor(s)

  • Team mentor: Barak A. Pearlmutter