ABSTRACT
Optimization is the process of adjusting
the inputs of a device, mathematical process, or experiment to find the minimum or maximum result.
The neural network is a powerful optimization technique used to realize
the conceptual ideas of artificial intelligence, and it can help experts
solve difficult analysis problems that were previously very complex. This
paper deals with job shop scheduling, whose ultimate
aim is to reduce idle time. Such scheduling is usually solved by means of
JOHNSON'S RULE. Finding optimal solutions for the job shop scheduling problem
requires a high computational effort, especially under consideration of
uncertainty and frequent re-planning. In contrast to computational solutions,
domain experts are often able to derive good local dispatching heuristics
by looking at typical problem instances. However, the number of
sequential solutions for N JOBS on M MACHINES is nearly (n!)^m,
where 'n' represents the number of jobs and 'm'
represents the number of machines; for example, five jobs on five machines
already give (5!)^5 ≈ 2.5 × 10^10 candidate schedules, which cannot be
enumerated manually. Therefore, we turn to neural networks to find a
near-optimal solution within a short span of time.
INTRODUCTION
Artificial
intelligence may be generally defined as the processing of an activity without
human intervention at each stage. The field emerged under the name of
artificial neural networks in the 1940s, when a close relationship was drawn
between neural networks and the human brain. After a period of slow research,
these networks were substantially developed by the mid-1980s.
A classical artificial
intelligence system is a machine that produces, over time, an evolving
collection of symbol structures. In contrast to this symbolic approach,
the neural networks approach adopts the brain metaphor.
The long-term
knowledge of a neural network is encoded as a set of weights on the connections
between its units. For this reason, the neural network architecture has also
been dubbed connectionist.
OPTIMIZATION
Optimization is a
process of making something better. It consists of trying variations
on an initial concept and using the information gained to improve on the idea.
More precisely, optimization is the process of adjusting the inputs or
characteristics of a device, mathematical process, or experiment to find the
minimum or maximum output. The input consists of parameters; the process or
function is known as the cost function, objective function, or fitness
function; and the output is the cost or fitness. If the process is an
experiment, then the parameters are physical inputs to the experiment.
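As a minimal illustration of this terminology, the hypothetical C++ sketch below defines a two-parameter cost function: the arguments are the parameters, the function body is the cost function, and the return value is the cost to be minimized. The function f(x, y) = (x - 3)^2 + (y + 1)^2 is our own example, not drawn from the paper's application.

    #include <iostream>

    // Hypothetical cost function: f(x, y) = (x - 3)^2 + (y + 1)^2,
    // minimized at x = 3, y = -1, where the cost is 0.
    double cost(double x, double y) {
        return (x - 3.0) * (x - 3.0) + (y + 1.0) * (y + 1.0);
    }

    int main() {
        std::cout << "cost(3, -1) = " << cost(3.0, -1.0) << "\n";  // prints 0
        std::cout << "cost(0,  0) = " << cost(0.0, 0.0) << "\n";   // prints 10
        return 0;
    }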
CATEGORIES OF OPTIMIZATION
Trial-and-error
optimization refers to the process of adjusting the parameters that affect the
output without knowing much about the process that produces that output.
When there is only
one parameter, the optimization is one-dimensional; a problem with more
than one parameter requires multidimensional optimization.
Dynamic optimization
means that the output is a function of time, while static optimization means
that the output is independent of time.
Optimization can
also be distinguished by either discrete or continuous parameters. Discrete
parameters have only a finite number of possible values, whereas continuous
parameters can take an infinite number of possible values.
Parameters often
have limits or constraints. Constrained optimization incorporates parameter
equalities and inequalities into the cost function, whereas unconstrained
optimization allows the parameters to take any value.
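To make the discrete, constrained case concrete, the short C++ sketch below performs trial-and-error optimization over a single integer parameter restricted to the range 0..10; the cost function is again an invented example.

    #include <iostream>

    // Invented cost over a discrete parameter; minimum at x = 7 with cost 2.
    double cost(int x) { return (x - 7) * (x - 7) + 2.0; }

    int main() {
        int bestX = 0;
        double bestCost = cost(0);
        for (int x = 1; x <= 10; ++x) {       // constraint: x in {0, ..., 10}
            double c = cost(x);               // one trial
            if (c < bestCost) {               // keep the best result so far
                bestCost = c;
                bestX = x;
            }
        }
        std::cout << "best x = " << bestX << ", cost = " << bestCost << "\n";
        return 0;
    }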
BASIC MODELS OF NEURAL NETWORK
There are three types of network model:
1. Feed forward network,
2. Back propagation network,
3. Counter propagation network.
REPRESENTATION OF NEURAL NETWORK
A neural network is
represented by a set of nodes and arrows. A node corresponds to a neuron, and
an arrow corresponds to a connection between two neurons, together with the
direction of signal flow.
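One simple way to realize this node-and-arrow representation in the paper's implementation language is sketched below; the Arrow structure and the three-node example network are our own illustration, not part of the original program.

    #include <iostream>
    #include <vector>

    // A directed connection ("arrow") between two neurons ("nodes").
    struct Arrow {
        int from, to;    // node indices, in the direction of signal flow
        double weight;   // the long-term knowledge is stored in the weights
    };

    int main() {
        // A tiny three-node network: node 0 feeds nodes 1 and 2, node 1 feeds node 2.
        std::vector<Arrow> arrows = { {0, 1, 0.5}, {0, 2, -0.3}, {1, 2, 0.8} };
        for (const Arrow& a : arrows)
            std::cout << a.from << " --> " << a.to
                      << "  (weight " << a.weight << ")\n";
        return 0;
    }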
BASIC NEURAL COMPUTATIONAL MODEL
The intelligence of
a neural network emerges from the collective behavior of its neurons, each of
which performs only very limited operations and, as an individual unit, works
slowly.
NETWORK PROPERTIES
Construction of a neural network involves:
1. Determination of the network properties, such as the network topology, the types of connections, the order of connections, and the weight range.
2. Determination of the activation range and the activation function.
3. Determination of the system dynamics, such as the weight initialization scheme, the activation-calculating formula, and the learning rule.
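As a rough sketch in the paper's implementation language, these design choices could be collected into a single structure as shown below; the field names and default values are assumptions made for illustration only.

    #include <cstddef>
    #include <iostream>

    struct NetworkProperties {
        std::size_t inputNodes  = 5;   // network topology: layer sizes
        std::size_t hiddenNodes = 10;
        std::size_t outputNodes = 5;
        double weightMin = -0.5;       // weight range for initialization
        double weightMax =  0.5;
        double alpha     =  0.9;       // learning-rate parameter of the learning rule
        double errorGoal =  0.00001;   // training stops below this error
    };

    int main() {
        NetworkProperties p;           // one place holding all design choices
        std::cout << p.inputNodes << "-" << p.hiddenNodes << "-"
                  << p.outputNodes << " network, alpha = " << p.alpha << "\n";
        return 0;
    }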
LAYERS OF NEURAL NETWORK
Input layer
Hidden layer
Output layer
A connection between
nodes in different layers is called an interlayer connection.
A connection between
nodes in the same layer is called an intralayer connection.
A connection between
nodes in distant (non-adjacent) layers is called a supralayer connection.
A connection
pointing from a node to itself is called a self-connection.
A connection that
combines signals from more than one node, often by multiplication, is known as
a high-order connection.
INFERENCE AND LEARNING
An artificial intelligence system based on the neural network approach will generally involve the following steps:
1. Select a suitable neural network model based on the nature of the problem.
2. Construct the neural network according to the characteristics of the application domain.
3. Train the neural network for making inferences or solving the problem.
4. If the stopping conditions are satisfied, move on to the next step; otherwise continue training.
TYPE OF NETWORK MODEL ENGAGED
Here the BPN (back propagation network) is used to
solve the optimization problem. Using this network model we can obtain a good
result, though not necessarily the best one.
THE BACK PROPAGATION NEURAL NETWORK
The back propagation
neural network is one of the most important historical developments in
neurocomputing. It is a powerful mapping network that has been successfully
applied to a wide variety of problems, ranging from application scoring to
image compression.
ARCHITECTURE OF THE BACK PROPAGATION NETWORK
The back propagation
neural network architecture is a hierarchical design consisting of fully
interconnected layers or rows of processing units. The information processing
operation that back propagation networks are intended to carry out is the
approximation of a bounded mapping or function f : A ⊂ R^n → R^m, from a
compact subset A of n-dimensional Euclidean space to a bounded subset f[A] of
m-dimensional Euclidean space, by means of training on examples
(x1, y1), (x2, y2), ... of the mapping, where yk = f(xk).
As always, it will be assumed that such examples of the mapping f are
generated by selecting the xk vectors randomly from A in accordance
with a fixed probability density function. The operational use to which the
network is put after training is also assumed to involve random selection of
input vectors x in accordance with the same density function. The basic
single-node BPN is shown below.
In general, the
architecture consists of K rows of processing units, numbered from the bottom
up beginning with 1. For simplicity, the terms row and layer will be used
interchangeably, even though each row will actually turn out to consist of two
heterogeneous layers or slabs. The first row consists of n fan-out processing
elements that simply accept the individual components xi of the
input vector x and distribute them, without modification, to all of the units
of the second row. Each unit on each row receives the output signal of each of
the units of the row below. This continues through all of the rows of the
network until the final row. The final (Kth) row of the network
consists of m units and produces the network's estimate y' of the
correct output vector y. For the purposes of this section, it will always be
assumed that K >= 3. Rows 2 through K-1 are called hidden rows.
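A minimal numerical sketch of this feed-forward flow is given below for K = 3 rows (input fan-out, one hidden row of sigmoid units, and a linear output unit). The weights and input values are arbitrary illustrative numbers, not taken from the project.

    #include <cmath>
    #include <iostream>
    #include <vector>

    // Standard sigmoid activation for the hidden units.
    double sigmoid(double s) { return 1.0 / (1.0 + std::exp(-s)); }

    int main() {
        // Row 1: two fan-out units distribute the input components unchanged.
        std::vector<double> x = {0.2, 0.8};
        // Row 2 (hidden): three sigmoid units, each fed by every unit of row 1.
        double w2[3][2] = { {0.1, -0.4}, {0.7, 0.2}, {-0.5, 0.9} };
        std::vector<double> h(3);
        for (int j = 0; j < 3; ++j)
            h[j] = sigmoid(w2[j][0] * x[0] + w2[j][1] * x[1]);
        // Row 3 (output): one linear unit produces the network's estimate y'.
        double w3[3] = {0.3, -0.6, 0.8};
        double y = 0.0;
        for (int j = 0; j < 3; ++j)
            y += w3[j] * h[j];
        std::cout << "network estimate y' = " << y << "\n";
        return 0;
    }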
Besides the feed-forward
connections mentioned above, each unit of each hidden row receives an
"error feedback" connection from each of the units above it. However, as will
be seen below, these are not merely fanned-out copies of a broadcast output;
they are separate connections, each carrying a different signal. Note that
each unit is composed of a single sun processing
element and several planet processing
elements. Each planet produces an output signal that is distributed to both its
sun and to the sun of the previous row that supplied input to it. Each planet
receives input either from one of the suns of the row below or from one of the
planets of each of the suns on the next higher row.
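The error-feedback signals described above are what the delta rule of back propagation computes. The C++ sketch below shows, under the textbook formulation rather than the project's exact code, one update step for a single sigmoid hidden unit feeding a linear output unit; all values are illustrative.

    #include <iostream>

    int main() {
        double h = 0.6;        // output of one hidden (sun) unit
        double w = 0.4;        // hidden-to-output weight
        double y = w * h;      // linear output unit's estimate
        double target = 1.0;   // desired output for this example
        double alpha  = 0.5;   // learning rate (illustrative value)

        double deltaOut = target - y;                       // output error signal
        double deltaHidden = deltaOut * w * h * (1.0 - h);  // error fed back through w,
                                                            // scaled by the sigmoid derivative
        w += alpha * deltaOut * h;                          // delta-rule weight update

        std::cout << "updated w = " << w
                  << ", hidden error signal = " << deltaHidden << "\n";
        return 0;
    }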
A single-layer
network of log-sigmoid neurons having R inputs is described here. Such networks
have one or more hidden layers of sigmoid neurons followed by an output layer
of linear neurons. The linear output layer lets the network produce values
outside the range of -1 to +1; on the other hand, if the output must lie
between 0 and 1, a sigmoid output layer is used. The function simuff takes the
network input P, the weights W, and the biases b, for up to three layers.
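simuff belongs to an early version of the MATLAB Neural Network Toolbox. As a rough C++ analogue of the activation choices just described (our own sketch, not toolbox code), the three transfer functions can be written as:

    #include <cmath>
    #include <iostream>

    double tansig(double s)  { return std::tanh(s); }               // output in (-1, +1)
    double logsig(double s)  { return 1.0 / (1.0 + std::exp(-s)); } // output in (0, 1)
    double purelin(double s) { return s; }                          // unbounded output

    int main() {
        double s = 2.5;        // an example net input
        std::cout << "tansig: "   << tansig(s)
                  << "  logsig: " << logsig(s)
                  << "  purelin: " << purelin(s) << "\n";
        return 0;
    }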
SOURCE CODE
The source code is
written in the C++ language. This could be done in any high-level language,
but C/C++ provides better performance than most others. Here the BPN
architecture is a standard one, and the rest of the program deals with the
application part. The variables w, hw, out, rnd, and alpha are used to
implement the problem: arrays are created to hold the values to be evaluated,
and the foremost purpose of this structure is to initialize values.
Weights are initialized using the system time (the gettime function) as a
random seed. The job priority is important here, and the scheduling acts
according to it. The error-reduction value is set in the program itself, and
the number of iterations depends on it.
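Since the full listing is not reproduced here, the following is a hedged C++ reconstruction of the pieces this paragraph describes: arrays w and hw for the two weight layers, out for the network outputs, time-seeded random initialization, a learning rate alpha, and a loop that iterates until the error falls below the preset reduction value. The layer sizes, threshold, and loop body are our assumptions, not the original code.

    #include <cstdlib>
    #include <ctime>
    #include <iostream>

    const int IN = 5, HID = 8, OUT = 5;   // layer sizes (assumed, not from the paper)
    double w[HID][IN];                    // input-to-hidden weights
    double hw[OUT][HID];                  // hidden-to-output weights
    double out[OUT];                      // network outputs

    // Small random value in [-0.5, +0.5] for weight initialization.
    double rnd() { return std::rand() / (double)RAND_MAX - 0.5; }

    int main() {
        std::srand((unsigned)std::time(0));       // time-based seed, as in the paper
        for (int j = 0; j < HID; ++j)
            for (int i = 0; i < IN; ++i) w[j][i] = rnd();
        for (int k = 0; k < OUT; ++k)
            for (int j = 0; j < HID; ++j) hw[k][j] = rnd();

        double alpha = 0.9;                       // learning rate
        double error = 1.0;
        const double goal = 0.00001;              // preset error-reduction value
        int iter = 0;
        while (error > goal && iter < 100000) {   // iterate until the error is small
            // ... forward pass filling out[], backward pass updating w and hw ...
            error *= 0.999;                       // placeholder for the computed error
            ++iter;
        }
        std::cout << "stopped after " << iter << " iterations\n";
        return 0;
    }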
SIMPLE EXAMPLE, ERROR REDUCTION, and SOLUTION
The chart given above contains industrial data: it records the operation of
five machines over seven days. The idle time of each machine varies, and the
idle time is found to be high, which is not acceptable; this indicates improper
scheduling of the machines. The machines are therefore rescheduled here using
the neural network. A sample of the error-reduction data is shown below.
ERROR REDUCTION:
0.000018 0.000007 0.000004 0.008282 0.009324 0.000003 0.000002 0.000008
0.000018 0.000007 0.000004 0.008281 0.009324 0.000003 0.000002 0.000008
SOLUTION
A --> D --> E --> B --> C
CONCLUSION
In this paper, we
made an attempt at a neural network that shows promising results for solving
the general job-scheduling problem. Depending on the nature of the application
and the strength of the internal data patterns, one can generally expect such a
network to train quite well. This applies even to problems where the
relationships are dynamic or non-linear. ANNs provide an analytical alternative
to conventional techniques, which are often limited by strict assumptions of
normality, linearity, variable independence, and so on. Because an ANN can
capture many kinds of relationships, it allows the user to quickly and
relatively easily model phenomena that would otherwise have been very difficult
or impossible to explain.
REFERENCES
1. Introduction to Artificial Neural Systems by Jacek M. Zurada.
2. Back Propagation Algorithm by Wasserman.
3. Let Us C++ by Yashavant Kanetkar.
4. Neurocomputing by Robert Hecht-Nielsen.
5. C++ Neural Networks & Fuzzy Logic by Dr. Valluru B. Rao and Hayagriva V. Rao.
6. Elements of Production Planning and Control by Samuel Eilon.
7. Time Optimization Using Neural Network, B.E. Project externally guided by Mr. Selladurai, PhD, Coimbatore Institute of Technology, Coimbatore.