Neural Networks and the Stock Market Pt. 2 – Network Implementation

See part 1 of the series here.

In the previous entry, I went through code that will go fetch stock price data for us. This entry will focus on the code that implements the neural network itself.

What exactly is a Neural Network?

The idea behind neural network algorithms is to take ‘inspiration’ from the way the brain works. The brain solves problems using clusters of neurons that are connected to one another. These neurons ‘fire’, or activate, when some threshold is reached, and their output is then passed on to other neurons for consumption.

In attempting to loosely mimic this behaviour, I’ll be creating a network filled with nodes that represent our artificial neurons. What these nodes need to be able to do is:

  • Take in a variable number of inputs
  • Apply some weight to each input
  • Combine/do something with the inputs
  • Do something in response

Additionally, it’s also fairly standard for each of these neurons to have a bias.

Code for the neurons

So below is the beginning of code that implements these nodes:

import sys
import os
import random
import math
import numpy as np
class Node(object):
    def __init__(self,number_of_inputs):
        self.inputs  = number_of_inputs
        self.bias    = random.uniform(0.0,1.0)
        # draw each weight independently; [value] * n would repeat a single draw
        self.weights = np.array([random.uniform(0.0,1.0) for x in range(number_of_inputs)])
        self.output  = 0.0

    def getOutput(self):
        return self.output

    def getWeightAtIdx(self,idx):
        return self.weights[idx]

    def getBias(self):
        return self.bias

    def debug_info(self):
        info =  "Bias: %f ; Weights:"%(self.bias)
        for w in self.weights:
            info += "%f," %(w)
        return info

So this is pretty straightforward. The constructor takes the number of inputs going into the node, creates a NumPy array of that length for the weights, and initializes the bias and weights to random values between 0 and 1.

The other functions let you get the values we care about for the node, namely its output, bias, and each of the weight values. The final function, debug_info, returns the bias and weight values, which will be nice to examine as I create and train the network, to make sure that the weights and bias are changing as I would expect.

A note on using random values: Always seed the random number generator you use, and always save that seed. If you’re using a neural network to solve some problem, you really, really need to be able to reproduce results.
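For example, a minimal sketch of what I mean (the seed value here is arbitrary):

import random
import numpy as np

SEED = 42             # arbitrary; record it alongside any results you keep
random.seed(SEED)     # seeds the generator used for the biases/weights below
np.random.seed(SEED)  # seed NumPy's generator too, if you end up using it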

Activity and Activation Functions

With artificial neural networks, the ‘doing something with the inputs’ is referred to as calculating a node’s activity. How this is calculated is ultimately up to you and usually depends on the type of problem. Since I think the problem at hand, stock forecasting, is probably either a regression or classification type problem, I’ll be using a linear basis function. So if the node has n inputs, this linear basis function is:

$Activity = bias + \sum_{i=1}^{n} weight_i \cdot input_i$

If I were instead doing something like a cluster analysis, this linear basis function would be replaced with a radial basis function. Next, this activity value is fed into what’s called an activation function. Similar to calculating a node’s activity, the choice of activation function is pretty much up to you; it really depends on the problem at hand. The sort of ‘classic’ choice of activation function is the sigmoid (or logistic, soft step) function:

$f(x) = \frac{1}{1+e^{-x}}$

where $x$ is the activity value of the node. This function restricts the node’s output to values between 0 and 1. In many cases, threshold logic is applied here, which gives each node an output of either 0 or 1. I will be skipping that, at least for now.
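If I did want threshold logic, it would be a small wrapper on top of the sigmoid; a sketch, with 0.5 as an assumed cutoff:

def thresholded_output(activation, cutoff=0.5):
    # squash to (0, 1) with the sigmoid, then snap to 0 or 1
    return 1.0 if 1.0/(1.0 + math.exp(-activation)) >= cutoff else 0.0

With that aside, here’s the node code for the activity calculation and activation function: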

def calculateActivity(self,input_vector):
    # linear basis function
    activity = self.bias
    activity += np.dot(input_vector,self.weights)
    return activity

def activationFunction(self,input_value):
    # sigmoid activation
    return 1.0/(1.0 + math.exp(-input_value))

def calculate(self,input_vector):
    activity_value = self.calculateActivity(input_vector)
    self.output = self.activationFunction(activity_value)

So this implements both the activity calculation and the activation function, as well as a single helper function to calculate the output of the node.
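To see a node in action, here’s a quick hypothetical usage sketch (the input values are made up):

node = Node(3)                    # a node expecting 3 inputs
node.calculate([0.2, 0.5, 0.9])   # feed an input vector through
print node.output                 # sigmoid output, somewhere in (0, 1)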

Learning

This is all well and good, but for the network to learn, it needs to be able to adjust the weights associated with the inputs. Doing so is fairly straightforward at the node level; you merely add or subtract values from the weights depending on the network’s parameters, which I will be talking about shortly. But for now, here’s all the code for the nodes in one place.

class Node(object):
    def __init__(self,number_of_inputs):
        self.inputs  = number_of_inputs
        self.bias    = random.uniform(0.0,1.0)
        # draw each weight independently; [value] * n would repeat a single draw
        self.weights = np.array([random.uniform(0.0,1.0) for x in range(number_of_inputs)])
        self.output  = 0.0

    def getOutput(self):
        return self.output

    def debug_info(self):
        info =  "Bias: %f ; Weights:"%(self.bias)
        for w in self.weights:
            info += "%f," %(w)
        return info

    def getWeightAtIdx(self,idx):
        return self.weights[idx]

    def getBias(self):
        return self.bias

    def calculateActivity(self,input_vector):
        #linear basis function
        activity = self.bias
        activity += np.dot(input_vector,self.weights)
        return activity

    def activationFunction(self,input_value):
        # Sigmoid Activation
        return 1.0/(1.0 + math.exp(-input_value))    

    def calculate(self,input_vector):
        activity_value = self.calculateActivity(input_vector)
        self.output = self.activationFunction(activity_value)

    def updateWeights(self,alpha,delta):
        # scale the delta by the node's output and the learning rate, then
        # nudge the bias and every weight by that same scalar amount
        # (the textbook rule would scale each weight by its own input instead)
        adjustment = self.output * delta * alpha
        self.bias = self.bias + adjustment
        self.weights = self.weights + adjustment

Networks

There are actually many types of neural networks: feed-forward, recurrent, and cellular, to name a few. Additionally, you can create networks that combine multiple types. For my purposes, I’ll be using a multi-layer, feed-forward network, which is nicely illustrated here. Basically, in a feed-forward network, input moves only one way through the network, as opposed to something like a recurrent network, where neurons can form cycles.

So here’s the code that sets up the network:

class FeedForwardNet(object):
    def __init__(self,no_of_inputs,no_of_hidden_layers,nodes_in_hiddens,no_of_outputs,learning_rate):
        self.number_of_inputs        = no_of_inputs
        self.number_of_hidden_layers = no_of_hidden_layers
        self.hidden_nodes            = []
        self.hidden_outputs          = []
        # first hidden layer: every node sees all of the network's inputs
        self.hidden_nodes.append(np.array([Node(no_of_inputs) for x in range(nodes_in_hiddens[0])]))
        self.hidden_outputs.append(np.array([0.0 for x in range(nodes_in_hiddens[0])]))
        if no_of_hidden_layers > 1:
            # each later layer takes its inputs from the nodes in the layer before it
            for i in range(1,len(nodes_in_hiddens)):
                self.hidden_nodes.append(np.array([Node(nodes_in_hiddens[i-1]) for x in range(nodes_in_hiddens[i])]))
                self.hidden_outputs.append(np.array([0.0 for x in range(nodes_in_hiddens[i])]))

        self.hidden_node_list        = nodes_in_hiddens
        self.output_layer            = np.array([Node(nodes_in_hiddens[-1]) for i in range(no_of_outputs)])
        self.number_of_outputs       = no_of_outputs
        self.network_output          = np.array([0.0 for i in range(no_of_outputs)])
        self.errors                  = np.array([0.0 for i in range(no_of_outputs)])
        self.alpha                   = learning_rate

    def getNetOutputs(self):
        return self.network_output

    def debug_info(self):
        print "Number of Inputs: ", self.number_of_inputs
        print "Number of Hidden Nodes: ", self.hidden_node_list
        print "Number of Outputs: ", self.number_of_outputs

        print "Hidden Layer Node Weights:"
        count = 1
        for layer in self.hidden_nodes:
            print "Hidden Layer",count,":"
            count +=1
            for node in layer:
                print node.debug_info()

        print "Ouput Layer Node Weights:"
        for node in self.output_layer:
            print node.debug_info()

        print "Output from network:"
        print self.network_output
        print "Network Errors:"
        print self.errors

Basically what’s happening here is we’re creating all the nodes that will be in the network. In the first (hidden) layer, each node receives every input into the network, so I create a list of nodes, each with the number of network inputs as node inputs. Then for each hidden layer after that, I create a list of nodes that take input from each node in the previous layer. Finally, I create an output layer of nodes, which take in the outputs from the final hidden layer and produce the outputs of the network. Additionally, I create arrays for the outputs of each layer of the network, to make things easier on myself, as well as an array to hold the current error in the output of the network. Finally, I store the input parameter $\alpha$, which is the learning rate of the network.

Choosing an appropriate learning rate is somewhat important. The learning rate scales how much the network corrects the weights of the nodes. Choose a really small learning rate and it may take you hours to train a network; choose too large an alpha and you may overshoot and miss convergence to an optimal set of weights. I don’t think there’s really a hard and fast way to initially select a value for the learning rate. There are also approaches that vary the learning rate over time, like simulated annealing, but I’m going to leave it with a static value for now.
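As a sketch of what a varying rate could look like (not something I’m using here), a simple exponential decay schedule; the decay constant is a made-up knob you’d tune per problem:

def decayed_alpha(initial_alpha, iteration, decay=0.001):
    # shrink the learning rate as training progresses
    return initial_alpha * math.exp(-decay * iteration)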

Why I’m Using Multiple Layers

There’s a reason I’ve decided to use a multi-layer network. It has to do with a single-node neural network’s inability to solve the ‘exclusive or’ (XOR) problem. There is a fairly good write-up of this, with an illustration of how adding layers gives the network the ability to solve the problem, here. Basically, adding hidden layers allows the network to approximate a much wider class of functions. Since I have no idea what the underlying function (if one even exists) for the movement of a stock’s price looks like, using multiple layers should give me the best chance of creating a reasonable model.
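As a quick illustration (assuming the Node and FeedForwardNet classes in this post), here’s a sketch of how you might set the XOR problem up; whether it actually converges will depend on the seed, the learning rate, and the number of iterations:

xor_inputs  = [[0.0,0.0],[0.0,1.0],[1.0,0.0],[1.0,1.0]]
xor_outputs = [[0.0],[1.0],[1.0],[0.0]]

# 2 inputs, 1 hidden layer of 3 nodes, 1 output, learning rate 0.5
xor_net = FeedForwardNet(2,1,[3],1,0.5)
for x in range(10000):
    for vec, truth in zip(xor_inputs, xor_outputs):
        xor_net.FeedForward(vec, truth, Training=True)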

Feeding Forward and Back Propagating Errors

The next two things I need to implement are how the network produces an output from an input (feeding forward), and how the network will ‘learn’ from that input. The process of calculating the network’s output is pretty straightforward, as shown below:

def FeedForward(self,input_vector,true_outputs=None,Training=False):

    for y in range(len(self.hidden_nodes)):
        layer  = self.hidden_nodes[y]
        output = self.hidden_outputs[y]
        for x in range(len(layer)):
            layer[x].calculate(input_vector)
            output[x] = layer[x].output
        # the outputs of this layer become the inputs to the next
        input_vector = output

    hidden_output = self.hidden_outputs[-1]
    for x in range(self.number_of_outputs):
        self.output_layer[x].calculate(hidden_output)
        self.network_output[x] = self.output_layer[x].output
        if Training:
            self.errors[x] = true_outputs[x] - self.network_output[x]

    if Training:
        self.BackPropagate()

    return self.network_output

The inputs, in list form, are the main argument to this function. There are two optional arguments: the expected outputs for those inputs (also in list form), and a flag for whether the network is being trained. If Training is True, the function computes the error for each output and then calls a function named BackPropagate, which will be discussed shortly. The function then returns the network’s outputs.

The approach I’ve chosen to take with the learning aspect of the network is to backpropagate the network errors, using what’s called the perceptron delta rule. This is based on the idea of using gradient descent to minimize errors in the network.
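Concretely, using the fact that the sigmoid’s derivative is $f'(x) = f(x)(1 - f(x))$, the delta for an output node with output $o$ and error $e$ is:

$\delta_o = e \cdot o(1 - o)$

and the delta for a hidden node with output $h$, whose layer feeds nodes indexed by $k$ with connecting weights $w_k$, is:

$\delta_h = h(1 - h)\sum_{k} w_k \delta_k$

which is what the code below computes.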

def BackPropagate(self):
    deltas_for_layer = []
    # output layer first: delta = error * derivative of the sigmoid
    for i in range(self.number_of_outputs):
        output = self.network_output[i]
        delta_o = self.errors[i] * (output * (1.0-output))
        self.output_layer[i].updateWeights(self.alpha,delta_o)
        deltas_for_layer.append(delta_o)
    prev_layer = self.output_layer
    # then walk the hidden layers from last to first
    for y in range(len(self.hidden_nodes)):
        layer  = self.hidden_nodes[-(1+y)]
        current_layer_deltas = []
        for j in range(len(layer)):
            output = layer[j].output
            # reset the accumulator for each node in this layer
            prev_layer_factor = 0.0
            for x in range(len(prev_layer)):
                prev_layer_factor += prev_layer[x].getWeightAtIdx(j) * deltas_for_layer[x]

            delta_h = (output * (1.0-output)) * prev_layer_factor
            current_layer_deltas.append(delta_h)
            layer[j].updateWeights(self.alpha,delta_h)
        prev_layer = layer
        deltas_for_layer = current_layer_deltas

Basically, each node is being adjusted by the error in its output multiplied by the derivative of the activation function. This is pretty straightforward for the output nodes, but looks a lot more complicated for the hidden layers. I don’t particularly feel like deriving those equations here, so if you look here, that’s basically what I implemented (or tried to).

Like before with the node, here’s the whole code for the Feed Forward Network class:

class FeedForwardNet(object):
    def __init__(self,no_of_inputs,no_of_hidden_layers,nodes_in_hiddens,no_of_outputs,learning_rate):
        self.number_of_inputs        = no_of_inputs
        self.number_of_hidden_layers = no_of_hidden_layers
        self.hidden_nodes            = []
        self.hidden_outputs          = []
        # first hidden layer: every node sees all of the network's inputs
        self.hidden_nodes.append(np.array([Node(no_of_inputs) for x in range(nodes_in_hiddens[0])]))
        self.hidden_outputs.append(np.array([0.0 for x in range(nodes_in_hiddens[0])]))
        if no_of_hidden_layers > 1:
            # each later layer takes its inputs from the nodes in the layer before it
            for i in range(1,len(nodes_in_hiddens)):
                self.hidden_nodes.append(np.array([Node(nodes_in_hiddens[i-1]) for x in range(nodes_in_hiddens[i])]))
                self.hidden_outputs.append(np.array([0.0 for x in range(nodes_in_hiddens[i])]))

        self.hidden_node_list        = nodes_in_hiddens

        self.output_layer            = np.array([Node(nodes_in_hiddens[-1]) for i in range(no_of_outputs)])
        self.number_of_outputs       = no_of_outputs
        self.network_output          = np.array([0.0 for i in range(no_of_outputs)])
        self.errors                  = np.array([0.0 for i in range(no_of_outputs)])
        self.alpha                   = learning_rate

    def getNetOutputs(self):
        return self.network_output

    def debug_info(self):
        print "Number of Inputs: ", self.number_of_inputs
        print "Number of Hidden Nodes: ", self.hidden_node_list
        print "Number of Outputs: ", self.number_of_outputs

        print "Hidden Layer Node Weights:"
        count = 1
        for layer in self.hidden_nodes:
            print "Hidden Layer",count,":"
            count +=1
            for node in layer:
                print node.debug_info()

        print "Ouput Layer Node Weights:"
        for node in self.output_layer:
            print node.debug_info()

        print "Output from network:"
        print self.network_output
        print "Network Errors:"
        print self.errors


    def FeedForward(self,input_vector,true_outputs=None,Training=False):

        for y in range(len(self.hidden_nodes)):
            layer  = self.hidden_nodes[y]
            output = self.hidden_outputs[y]
            for x in range(len(layer)):
                layer[x].calculate(input_vector)
                output[x] = layer[x].output
            # the outputs of this layer become the inputs to the next
            input_vector = output
        hidden_output = self.hidden_outputs[-1]
        for x in range(self.number_of_outputs):
            self.output_layer[x].calculate(hidden_output)
            self.network_output[x] = self.output_layer[x].output
            if Training:
                self.errors[x] = true_outputs[x] - self.network_output[x]

        if Training:
            self.BackPropagate()

        return self.network_output

    def BackPropagate(self):
        deltas_for_layer = []
        # output layer first: delta = error * derivative of the sigmoid
        for i in range(self.number_of_outputs):
            output = self.network_output[i]
            delta_o = self.errors[i] * (output * (1.0-output))
            self.output_layer[i].updateWeights(self.alpha,delta_o)
            deltas_for_layer.append(delta_o)
        prev_layer = self.output_layer
        # then walk the hidden layers from last to first
        for y in range(len(self.hidden_nodes)):
            layer  = self.hidden_nodes[-(1+y)]
            current_layer_deltas = []
            for j in range(len(layer)):
                output = layer[j].output
                # reset the accumulator for each node in this layer
                prev_layer_factor = 0.0
                for x in range(len(prev_layer)):
                    prev_layer_factor += prev_layer[x].getWeightAtIdx(j) * deltas_for_layer[x]

                delta_h = (output * (1.0-output)) * prev_layer_factor
                current_layer_deltas.append(delta_h)
                layer[j].updateWeights(self.alpha,delta_h)
            prev_layer = layer
            deltas_for_layer = current_layer_deltas

Some Test Cases

To make sure all is working as expected, I’ve run some test cases. Basically, I’m varying some network properties, like the number of outputs and hidden layers, to make sure the networks learn properly. To do so, I’m giving the network a single input vector and true output vector, then training on that for several thousand iterations to ensure convergence.

#no_of_inputs,no_of_hidden_layers,nodes_in_hiddens,no_of_outputs,learning_rate
test_network = FeedForwardNet(5,2,[7,10],1,0.1)


training_vector = [0.45,0.5,0.55,0.6,0.65]
training_output = [1.0]
test_network.debug_info()
for x in range(5000):
    test_network.FeedForward(training_vector,training_output,Training=True)
    if(x%1000 == 0):
        test_network.debug_info()


Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  1
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.446231 ; Weights:0.485652,0.485652,0.485652,0.485652,0.485652,
Bias: 0.305788 ; Weights:0.785681,0.785681,0.785681,0.785681,0.785681,
Bias: 0.239627 ; Weights:0.726404,0.726404,0.726404,0.726404,0.726404,
Bias: 0.061429 ; Weights:0.427995,0.427995,0.427995,0.427995,0.427995,
Bias: 0.793471 ; Weights:0.823827,0.823827,0.823827,0.823827,0.823827,
Bias: 0.549464 ; Weights:0.578656,0.578656,0.578656,0.578656,0.578656,
Bias: 0.645772 ; Weights:0.346040,0.346040,0.346040,0.346040,0.346040,
Hidden Layer 2 :
Bias: 0.470286 ; Weights:0.730483,0.730483,0.730483,0.730483,0.730483,0.730483,0.730483,
Bias: 0.747228 ; Weights:0.565473,0.565473,0.565473,0.565473,0.565473,0.565473,0.565473,
Bias: 0.889271 ; Weights:0.357468,0.357468,0.357468,0.357468,0.357468,0.357468,0.357468,
Bias: 0.888109 ; Weights:0.175147,0.175147,0.175147,0.175147,0.175147,0.175147,0.175147,
Bias: 0.242589 ; Weights:0.167396,0.167396,0.167396,0.167396,0.167396,0.167396,0.167396,
Bias: 0.643208 ; Weights:0.225440,0.225440,0.225440,0.225440,0.225440,0.225440,0.225440,
Bias: 0.102560 ; Weights:0.704646,0.704646,0.704646,0.704646,0.704646,0.704646,0.704646,
Bias: 0.584261 ; Weights:0.542339,0.542339,0.542339,0.542339,0.542339,0.542339,0.542339,
Bias: 0.625072 ; Weights:0.451780,0.451780,0.451780,0.451780,0.451780,0.451780,0.451780,
Bias: 0.802075 ; Weights:0.317931,0.317931,0.317931,0.317931,0.317931,0.317931,0.317931,
Ouput Layer Node Weights:
Bias: 0.175703 ; Weights:0.351520,0.351520,0.351520,0.351520,0.351520,0.351520,0.351520,0.351520,0.351520,0.351520,
Output from network:
[ 0.]
Network Errors:
[ 0.]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  1
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.446234 ; Weights:0.485655,0.485655,0.485655,0.485655,0.485655,
Bias: 0.305792 ; Weights:0.785685,0.785685,0.785685,0.785685,0.785685,
Bias: 0.239633 ; Weights:0.726411,0.726411,0.726411,0.726411,0.726411,
Bias: 0.061443 ; Weights:0.428009,0.428009,0.428009,0.428009,0.428009,
Bias: 0.793477 ; Weights:0.823832,0.823832,0.823832,0.823832,0.823832,
Bias: 0.549478 ; Weights:0.578670,0.578670,0.578670,0.578670,0.578670,
Bias: 0.645794 ; Weights:0.346063,0.346063,0.346063,0.346063,0.346063,
Hidden Layer 2 :
Bias: 0.470286 ; Weights:0.730483,0.730483,0.730483,0.730483,0.730483,0.730483,0.730483,
Bias: 0.747229 ; Weights:0.565474,0.565474,0.565474,0.565474,0.565474,0.565474,0.565474,
Bias: 0.889275 ; Weights:0.357472,0.357472,0.357472,0.357472,0.357472,0.357472,0.357472,
Bias: 0.888121 ; Weights:0.175159,0.175159,0.175159,0.175159,0.175159,0.175159,0.175159,
Bias: 0.242610 ; Weights:0.167417,0.167417,0.167417,0.167417,0.167417,0.167417,0.167417,
Bias: 0.643226 ; Weights:0.225457,0.225457,0.225457,0.225457,0.225457,0.225457,0.225457,
Bias: 0.102563 ; Weights:0.704648,0.704648,0.704648,0.704648,0.704648,0.704648,0.704648,
Bias: 0.584266 ; Weights:0.542344,0.542344,0.542344,0.542344,0.542344,0.542344,0.542344,
Bias: 0.625081 ; Weights:0.451789,0.451789,0.451789,0.451789,0.451789,0.451789,0.451789,
Bias: 0.802092 ; Weights:0.317947,0.317947,0.317947,0.317947,0.317947,0.317947,0.317947,
Ouput Layer Node Weights:
Bias: 0.175790 ; Weights:0.351607,0.351607,0.351607,0.351607,0.351607,0.351607,0.351607,0.351607,0.351607,0.351607,
Output from network:
[ 0.96962312]
Network Errors:
[ 0.03037688]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  1
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.447994 ; Weights:0.487414,0.487414,0.487414,0.487414,0.487414,
Bias: 0.308005 ; Weights:0.787898,0.787898,0.787898,0.787898,0.787898,
Bias: 0.243558 ; Weights:0.730335,0.730335,0.730335,0.730335,0.730335,
Bias: 0.070416 ; Weights:0.436982,0.436982,0.436982,0.436982,0.436982,
Bias: 0.796873 ; Weights:0.827229,0.827229,0.827229,0.827229,0.827229,
Bias: 0.557828 ; Weights:0.587020,0.587020,0.587020,0.587020,0.587020,
Bias: 0.659235 ; Weights:0.359503,0.359503,0.359503,0.359503,0.359503,
Hidden Layer 2 :
Bias: 0.470418 ; Weights:0.730615,0.730615,0.730615,0.730615,0.730615,0.730615,0.730615,
Bias: 0.747770 ; Weights:0.566015,0.566015,0.566015,0.566015,0.566015,0.566015,0.566015,
Bias: 0.891573 ; Weights:0.359769,0.359769,0.359769,0.359769,0.359769,0.359769,0.359769,
Bias: 0.895311 ; Weights:0.182350,0.182350,0.182350,0.182350,0.182350,0.182350,0.182350,
Bias: 0.255306 ; Weights:0.180113,0.180113,0.180113,0.180113,0.180113,0.180113,0.180113,
Bias: 0.653494 ; Weights:0.235725,0.235725,0.235725,0.235725,0.235725,0.235725,0.235725,
Bias: 0.104103 ; Weights:0.706188,0.706188,0.706188,0.706188,0.706188,0.706188,0.706188,
Bias: 0.587134 ; Weights:0.545212,0.545212,0.545212,0.545212,0.545212,0.545212,0.545212,
Bias: 0.630242 ; Weights:0.456949,0.456949,0.456949,0.456949,0.456949,0.456949,0.456949,
Bias: 0.811998 ; Weights:0.327853,0.327853,0.327853,0.327853,0.327853,0.327853,0.327853,
Ouput Layer Node Weights:
Bias: 0.225880 ; Weights:0.401697,0.401697,0.401697,0.401697,0.401697,0.401697,0.401697,0.401697,0.401697,0.401697,
Output from network:
[ 0.98196423]
Network Errors:
[ 0.01803577]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  1
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.448905 ; Weights:0.488325,0.488325,0.488325,0.488325,0.488325,
Bias: 0.309149 ; Weights:0.789042,0.789042,0.789042,0.789042,0.789042,
Bias: 0.245579 ; Weights:0.732356,0.732356,0.732356,0.732356,0.732356,
Bias: 0.075033 ; Weights:0.441599,0.441599,0.441599,0.441599,0.441599,
Bias: 0.798622 ; Weights:0.828977,0.828977,0.828977,0.828977,0.828977,
Bias: 0.562091 ; Weights:0.591283,0.591283,0.591283,0.591283,0.591283,
Bias: 0.666077 ; Weights:0.366345,0.366345,0.366345,0.366345,0.366345,
Hidden Layer 2 :
Bias: 0.470487 ; Weights:0.730684,0.730684,0.730684,0.730684,0.730684,0.730684,0.730684,
Bias: 0.748052 ; Weights:0.566296,0.566296,0.566296,0.566296,0.566296,0.566296,0.566296,
Bias: 0.892763 ; Weights:0.360959,0.360959,0.360959,0.360959,0.360959,0.360959,0.360959,
Bias: 0.898992 ; Weights:0.186031,0.186031,0.186031,0.186031,0.186031,0.186031,0.186031,
Bias: 0.261803 ; Weights:0.186610,0.186610,0.186610,0.186610,0.186610,0.186610,0.186610,
Bias: 0.658686 ; Weights:0.240917,0.240917,0.240917,0.240917,0.240917,0.240917,0.240917,
Bias: 0.104899 ; Weights:0.706984,0.706984,0.706984,0.706984,0.706984,0.706984,0.706984,
Bias: 0.588609 ; Weights:0.546687,0.546687,0.546687,0.546687,0.546687,0.546687,0.546687,
Bias: 0.632872 ; Weights:0.459580,0.459580,0.459580,0.459580,0.459580,0.459580,0.459580,
Bias: 0.816963 ; Weights:0.332818,0.332818,0.332818,0.332818,0.332818,0.332818,0.332818,
Ouput Layer Node Weights:
Bias: 0.250044 ; Weights:0.425861,0.425861,0.425861,0.425861,0.425861,0.425861,0.425861,0.425861,0.425861,0.425861,
Output from network:
[ 0.98602059]
Network Errors:
[ 0.01397941]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  1
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.449529 ; Weights:0.488949,0.488949,0.488949,0.488949,0.488949,
Bias: 0.309931 ; Weights:0.789825,0.789825,0.789825,0.789825,0.789825,
Bias: 0.246960 ; Weights:0.733738,0.733738,0.733738,0.733738,0.733738,
Bias: 0.078187 ; Weights:0.444753,0.444753,0.444753,0.444753,0.444753,
Bias: 0.799816 ; Weights:0.830172,0.830172,0.830172,0.830172,0.830172,
Bias: 0.564991 ; Weights:0.594183,0.594183,0.594183,0.594183,0.594183,
Bias: 0.670721 ; Weights:0.370989,0.370989,0.370989,0.370989,0.370989,
Hidden Layer 2 :
Bias: 0.470535 ; Weights:0.730732,0.730732,0.730732,0.730732,0.730732,0.730732,0.730732,
Bias: 0.748245 ; Weights:0.566490,0.566490,0.566490,0.566490,0.566490,0.566490,0.566490,
Bias: 0.893579 ; Weights:0.361775,0.361775,0.361775,0.361775,0.361775,0.361775,0.361775,
Bias: 0.901500 ; Weights:0.188539,0.188539,0.188539,0.188539,0.188539,0.188539,0.188539,
Bias: 0.266224 ; Weights:0.191031,0.191031,0.191031,0.191031,0.191031,0.191031,0.191031,
Bias: 0.662198 ; Weights:0.244429,0.244429,0.244429,0.244429,0.244429,0.244429,0.244429,
Bias: 0.105444 ; Weights:0.707529,0.707529,0.707529,0.707529,0.707529,0.707529,0.707529,
Bias: 0.589617 ; Weights:0.547695,0.547695,0.547695,0.547695,0.547695,0.547695,0.547695,
Bias: 0.634660 ; Weights:0.461368,0.461368,0.461368,0.461368,0.461368,0.461368,0.461368,
Bias: 0.820306 ; Weights:0.336162,0.336162,0.336162,0.336162,0.336162,0.336162,0.336162,
Ouput Layer Node Weights:
Bias: 0.266040 ; Weights:0.441857,0.441857,0.441857,0.441857,0.441857,0.441857,0.441857,0.441857,0.441857,0.441857,
Output from network:
[ 0.98820188]
Network Errors:
[ 0.01179812]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  1
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.450006 ; Weights:0.489426,0.489426,0.489426,0.489426,0.489426,
Bias: 0.310530 ; Weights:0.790423,0.790423,0.790423,0.790423,0.790423,
Bias: 0.248014 ; Weights:0.734791,0.734791,0.734791,0.734791,0.734791,
Bias: 0.080592 ; Weights:0.447158,0.447158,0.447158,0.447158,0.447158,
Bias: 0.800728 ; Weights:0.831083,0.831083,0.831083,0.831083,0.831083,
Bias: 0.567196 ; Weights:0.596388,0.596388,0.596388,0.596388,0.596388,
Bias: 0.674247 ; Weights:0.374516,0.374516,0.374516,0.374516,0.374516,
Hidden Layer 2 :
Bias: 0.470571 ; Weights:0.730768,0.730768,0.730768,0.730768,0.730768,0.730768,0.730768,
Bias: 0.748394 ; Weights:0.566638,0.566638,0.566638,0.566638,0.566638,0.566638,0.566638,
Bias: 0.894203 ; Weights:0.362400,0.362400,0.362400,0.362400,0.362400,0.362400,0.362400,
Bias: 0.903410 ; Weights:0.190448,0.190448,0.190448,0.190448,0.190448,0.190448,0.190448,
Bias: 0.269587 ; Weights:0.194394,0.194394,0.194394,0.194394,0.194394,0.194394,0.194394,
Bias: 0.664858 ; Weights:0.247090,0.247090,0.247090,0.247090,0.247090,0.247090,0.247090,
Bias: 0.105861 ; Weights:0.707946,0.707946,0.707946,0.707946,0.707946,0.707946,0.707946,
Bias: 0.590387 ; Weights:0.548465,0.548465,0.548465,0.548465,0.548465,0.548465,0.548465,
Bias: 0.636020 ; Weights:0.462728,0.462728,0.462728,0.462728,0.462728,0.462728,0.462728,
Bias: 0.822831 ; Weights:0.338687,0.338687,0.338687,0.338687,0.338687,0.338687,0.338687,
Ouput Layer Node Weights:
Bias: 0.277994 ; Weights:0.453811,0.453811,0.453811,0.453811,0.453811,0.453811,0.453811,0.453811,0.453811,0.453811,
Output from network:
[ 0.98961156]
Network Errors:
[ 0.01038844]
#no_of_inputs,no_of_hidden_layers,nodes_in_hiddens,no_of_outputs,learning_rate
test_network = FeedForwardNet(5,2,[7,10],2,0.1)

test_network.debug_info()


training_vector = [0.45,0.5,0.55,0.6,0.65]
training_output = [1.0,0.5]


for x in range(5000):
    test_network.FeedForward(training_vector,training_output,Training=True)
    if(x%1000 == 0):
        test_network.debug_info()


test_network.debug_info()
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.582275 ; Weights:0.094837,0.094837,0.094837,0.094837,0.094837,
Bias: 0.742894 ; Weights:0.531346,0.531346,0.531346,0.531346,0.531346,
Bias: 0.424033 ; Weights:0.436275,0.436275,0.436275,0.436275,0.436275,
Bias: 0.708450 ; Weights:0.097709,0.097709,0.097709,0.097709,0.097709,
Bias: 0.169011 ; Weights:0.100227,0.100227,0.100227,0.100227,0.100227,
Bias: 0.553225 ; Weights:0.104048,0.104048,0.104048,0.104048,0.104048,
Bias: 0.521774 ; Weights:0.095186,0.095186,0.095186,0.095186,0.095186,
Hidden Layer 2 :
Bias: 0.633512 ; Weights:0.262039,0.262039,0.262039,0.262039,0.262039,0.262039,0.262039,
Bias: 0.138068 ; Weights:0.687594,0.687594,0.687594,0.687594,0.687594,0.687594,0.687594,
Bias: 0.250054 ; Weights:0.127245,0.127245,0.127245,0.127245,0.127245,0.127245,0.127245,
Bias: 0.493281 ; Weights:0.496845,0.496845,0.496845,0.496845,0.496845,0.496845,0.496845,
Bias: 0.096920 ; Weights:0.070001,0.070001,0.070001,0.070001,0.070001,0.070001,0.070001,
Bias: 0.041055 ; Weights:0.447143,0.447143,0.447143,0.447143,0.447143,0.447143,0.447143,
Bias: 0.364872 ; Weights:0.921218,0.921218,0.921218,0.921218,0.921218,0.921218,0.921218,
Bias: 0.789172 ; Weights:0.285686,0.285686,0.285686,0.285686,0.285686,0.285686,0.285686,
Bias: 0.199523 ; Weights:0.579662,0.579662,0.579662,0.579662,0.579662,0.579662,0.579662,
Bias: 0.891284 ; Weights:0.219399,0.219399,0.219399,0.219399,0.219399,0.219399,0.219399,
Ouput Layer Node Weights:
Bias: 0.354867 ; Weights:0.360490,0.360490,0.360490,0.360490,0.360490,0.360490,0.360490,0.360490,0.360490,0.360490,
Bias: 0.498970 ; Weights:0.186467,0.186467,0.186467,0.186467,0.186467,0.186467,0.186467,0.186467,0.186467,0.186467,
Output from network:
[ 0.  0.]
Network Errors:
[ 0.  0.]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.582162 ; Weights:0.094723,0.094723,0.094723,0.094723,0.094723,
Bias: 0.742770 ; Weights:0.531222,0.531222,0.531222,0.531222,0.531222,
Bias: 0.423768 ; Weights:0.436010,0.436010,0.436010,0.436010,0.436010,
Bias: 0.708006 ; Weights:0.097265,0.097265,0.097265,0.097265,0.097265,
Bias: 0.168453 ; Weights:0.099669,0.099669,0.099669,0.099669,0.099669,
Bias: 0.552546 ; Weights:0.103369,0.103369,0.103369,0.103369,0.103369,
Bias: 0.520979 ; Weights:0.094390,0.094390,0.094390,0.094390,0.094390,
Hidden Layer 2 :
Bias: 0.633451 ; Weights:0.261978,0.261978,0.261978,0.261978,0.261978,0.261978,0.261978,
Bias: 0.138037 ; Weights:0.687564,0.687564,0.687564,0.687564,0.687564,0.687564,0.687564,
Bias: 0.249768 ; Weights:0.126959,0.126959,0.126959,0.126959,0.126959,0.126959,0.126959,
Bias: 0.493174 ; Weights:0.496738,0.496738,0.496738,0.496738,0.496738,0.496738,0.496738,
Bias: 0.096446 ; Weights:0.069526,0.069526,0.069526,0.069526,0.069526,0.069526,0.069526,
Bias: 0.040770 ; Weights:0.446858,0.446858,0.446858,0.446858,0.446858,0.446858,0.446858,
Bias: 0.364845 ; Weights:0.921191,0.921191,0.921191,0.921191,0.921191,0.921191,0.921191,
Bias: 0.788768 ; Weights:0.285282,0.285282,0.285282,0.285282,0.285282,0.285282,0.285282,
Bias: 0.199309 ; Weights:0.579447,0.579447,0.579447,0.579447,0.579447,0.579447,0.579447,
Bias: 0.890688 ; Weights:0.218804,0.218804,0.218804,0.218804,0.218804,0.218804,0.218804,
Ouput Layer Node Weights:
Bias: 0.354944 ; Weights:0.360568,0.360568,0.360568,0.360568,0.360568,0.360568,0.360568,0.360568,0.360568,0.360568,
Bias: 0.495640 ; Weights:0.183138,0.183138,0.183138,0.183138,0.183138,0.183138,0.183138,0.183138,0.183138,0.183138,
Output from network:
[ 0.9713533   0.89454732]
Network Errors:
[ 0.0286467  -0.39454732]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.582123 ; Weights:0.094684,0.094684,0.094684,0.094684,0.094684,
Bias: 0.742727 ; Weights:0.531180,0.531180,0.531180,0.531180,0.531180,
Bias: 0.423676 ; Weights:0.435919,0.435919,0.435919,0.435919,0.435919,
Bias: 0.707852 ; Weights:0.097111,0.097111,0.097111,0.097111,0.097111,
Bias: 0.168256 ; Weights:0.099472,0.099472,0.099472,0.099472,0.099472,
Bias: 0.552310 ; Weights:0.103134,0.103134,0.103134,0.103134,0.103134,
Bias: 0.520702 ; Weights:0.094114,0.094114,0.094114,0.094114,0.094114,
Hidden Layer 2 :
Bias: 0.633428 ; Weights:0.261955,0.261955,0.261955,0.261955,0.261955,0.261955,0.261955,
Bias: 0.138026 ; Weights:0.687552,0.687552,0.687552,0.687552,0.687552,0.687552,0.687552,
Bias: 0.249656 ; Weights:0.126847,0.126847,0.126847,0.126847,0.126847,0.126847,0.126847,
Bias: 0.493134 ; Weights:0.496698,0.496698,0.496698,0.496698,0.496698,0.496698,0.496698,
Bias: 0.096256 ; Weights:0.069337,0.069337,0.069337,0.069337,0.069337,0.069337,0.069337,
Bias: 0.040665 ; Weights:0.446753,0.446753,0.446753,0.446753,0.446753,0.446753,0.446753,
Bias: 0.364835 ; Weights:0.921181,0.921181,0.921181,0.921181,0.921181,0.921181,0.921181,
Bias: 0.788619 ; Weights:0.285133,0.285133,0.285133,0.285133,0.285133,0.285133,0.285133,
Bias: 0.199230 ; Weights:0.579369,0.579369,0.579369,0.579369,0.579369,0.579369,0.579369,
Bias: 0.890472 ; Weights:0.218588,0.218588,0.218588,0.218588,0.218588,0.218588,0.218588,
Ouput Layer Node Weights:
Bias: 0.403629 ; Weights:0.409253,0.409253,0.409253,0.409253,0.409253,0.409253,0.409253,0.409253,0.409253,0.409253,
Bias: 0.280566 ; Weights:-0.031936,-0.031936,-0.031936,-0.031936,-0.031936,-0.031936,-0.031936,-0.031936,-0.031936,-0.031936,
Output from network:
[ 0.98200255  0.49998719]
Network Errors:
[  1.79974457e-02   1.28123978e-05]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.583912 ; Weights:0.096473,0.096473,0.096473,0.096473,0.096473,
Bias: 0.744685 ; Weights:0.533137,0.533137,0.533137,0.533137,0.533137,
Bias: 0.427857 ; Weights:0.440100,0.440100,0.440100,0.440100,0.440100,
Bias: 0.714864 ; Weights:0.104123,0.104123,0.104123,0.104123,0.104123,
Bias: 0.177101 ; Weights:0.108317,0.108317,0.108317,0.108317,0.108317,
Bias: 0.563032 ; Weights:0.113856,0.113856,0.113856,0.113856,0.113856,
Bias: 0.533270 ; Weights:0.106681,0.106681,0.106681,0.106681,0.106681,
Hidden Layer 2 :
Bias: 0.634391 ; Weights:0.262918,0.262918,0.262918,0.262918,0.262918,0.262918,0.262918,
Bias: 0.138504 ; Weights:0.688031,0.688031,0.688031,0.688031,0.688031,0.688031,0.688031,
Bias: 0.254184 ; Weights:0.131375,0.131375,0.131375,0.131375,0.131375,0.131375,0.131375,
Bias: 0.494817 ; Weights:0.498381,0.498381,0.498381,0.498381,0.498381,0.498381,0.498381,
Bias: 0.103808 ; Weights:0.076889,0.076889,0.076889,0.076889,0.076889,0.076889,0.076889,
Bias: 0.045126 ; Weights:0.451214,0.451214,0.451214,0.451214,0.451214,0.451214,0.451214,
Bias: 0.365257 ; Weights:0.921603,0.921603,0.921603,0.921603,0.921603,0.921603,0.921603,
Bias: 0.794936 ; Weights:0.291450,0.291450,0.291450,0.291450,0.291450,0.291450,0.291450,
Bias: 0.202585 ; Weights:0.582724,0.582724,0.582724,0.582724,0.582724,0.582724,0.582724,
Bias: 0.899748 ; Weights:0.227863,0.227863,0.227863,0.227863,0.227863,0.227863,0.227863,
Ouput Layer Node Weights:
Bias: 0.427837 ; Weights:0.433461,0.433461,0.433461,0.433461,0.433461,0.433461,0.433461,0.433461,0.433461,0.433461,
Bias: 0.280691 ; Weights:-0.031812,-0.031812,-0.031812,-0.031812,-0.031812,-0.031812,-0.031812,-0.031812,-0.031812,-0.031812,
Output from network:
[ 0.98597415  0.49999212]
Network Errors:
[  1.40258531e-02   7.87858898e-06]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.585145 ; Weights:0.097707,0.097707,0.097707,0.097707,0.097707,
Bias: 0.746030 ; Weights:0.534482,0.534482,0.534482,0.534482,0.534482,
Bias: 0.430723 ; Weights:0.442966,0.442966,0.442966,0.442966,0.442966,
Bias: 0.719682 ; Weights:0.108941,0.108941,0.108941,0.108941,0.108941,
Bias: 0.183231 ; Weights:0.114447,0.114447,0.114447,0.114447,0.114447,
Bias: 0.570403 ; Weights:0.121226,0.121226,0.121226,0.121226,0.121226,
Bias: 0.541914 ; Weights:0.115326,0.115326,0.115326,0.115326,0.115326,
Hidden Layer 2 :
Bias: 0.635056 ; Weights:0.263584,0.263584,0.263584,0.263584,0.263584,0.263584,0.263584,
Bias: 0.138830 ; Weights:0.688357,0.688357,0.688357,0.688357,0.688357,0.688357,0.688357,
Bias: 0.257323 ; Weights:0.134513,0.134513,0.134513,0.134513,0.134513,0.134513,0.134513,
Bias: 0.495965 ; Weights:0.499530,0.499530,0.499530,0.499530,0.499530,0.499530,0.499530,
Bias: 0.109095 ; Weights:0.082176,0.082176,0.082176,0.082176,0.082176,0.082176,0.082176,
Bias: 0.048149 ; Weights:0.454237,0.454237,0.454237,0.454237,0.454237,0.454237,0.454237,
Bias: 0.365542 ; Weights:0.921888,0.921888,0.921888,0.921888,0.921888,0.921888,0.921888,
Bias: 0.799203 ; Weights:0.295717,0.295717,0.295717,0.295717,0.295717,0.295717,0.295717,
Bias: 0.204850 ; Weights:0.584988,0.584988,0.584988,0.584988,0.584988,0.584988,0.584988,
Bias: 0.905974 ; Weights:0.234090,0.234090,0.234090,0.234090,0.234090,0.234090,0.234090,
Ouput Layer Node Weights:
Bias: 0.443970 ; Weights:0.449594,0.449594,0.449594,0.449594,0.449594,0.449594,0.449594,0.449594,0.449594,0.449594,
Bias: 0.280774 ; Weights:-0.031728,-0.031728,-0.031728,-0.031728,-0.031728,-0.031728,-0.031728,-0.031728,-0.031728,-0.031728,
Output from network:
[ 0.98814102  0.49999433]
Network Errors:
[  1.18589802e-02   5.66918449e-06]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.586091 ; Weights:0.098652,0.098652,0.098652,0.098652,0.098652,
Bias: 0.747058 ; Weights:0.535510,0.535510,0.535510,0.535510,0.535510,
Bias: 0.432909 ; Weights:0.445152,0.445152,0.445152,0.445152,0.445152,
Bias: 0.723364 ; Weights:0.112623,0.112623,0.112623,0.112623,0.112623,
Bias: 0.187943 ; Weights:0.119159,0.119159,0.119159,0.119159,0.119159,
Bias: 0.576035 ; Weights:0.126858,0.126858,0.126858,0.126858,0.126858,
Bias: 0.548521 ; Weights:0.121932,0.121932,0.121932,0.121932,0.121932,
Hidden Layer 2 :
Bias: 0.635567 ; Weights:0.264094,0.264094,0.264094,0.264094,0.264094,0.264094,0.264094,
Bias: 0.139078 ; Weights:0.688604,0.688604,0.688604,0.688604,0.688604,0.688604,0.688604,
Bias: 0.259737 ; Weights:0.136927,0.136927,0.136927,0.136927,0.136927,0.136927,0.136927,
Bias: 0.496839 ; Weights:0.500403,0.500403,0.500403,0.500403,0.500403,0.500403,0.500403,
Bias: 0.113189 ; Weights:0.086270,0.086270,0.086270,0.086270,0.086270,0.086270,0.086270,
Bias: 0.050436 ; Weights:0.456524,0.456524,0.456524,0.456524,0.456524,0.456524,0.456524,
Bias: 0.365758 ; Weights:0.922103,0.922103,0.922103,0.922103,0.922103,0.922103,0.922103,
Bias: 0.802425 ; Weights:0.298939,0.298939,0.298939,0.298939,0.298939,0.298939,0.298939,
Bias: 0.206559 ; Weights:0.586698,0.586698,0.586698,0.586698,0.586698,0.586698,0.586698,
Bias: 0.910654 ; Weights:0.238770,0.238770,0.238770,0.238770,0.238770,0.238770,0.238770,
Ouput Layer Node Weights:
Bias: 0.456056 ; Weights:0.461680,0.461680,0.461680,0.461680,0.461680,0.461680,0.461680,0.461680,0.461680,0.461680,
Bias: 0.280836 ; Weights:-0.031666,-0.031666,-0.031666,-0.031666,-0.031666,-0.031666,-0.031666,-0.031666,-0.031666,-0.031666,
Output from network:
[ 0.98955058  0.49999558]
Network Errors:
[  1.04494162e-02   4.41839926e-06]
Number of Inputs:  5
Number of Hidden Nodes:  [7, 10]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.586858 ; Weights:0.099419,0.099419,0.099419,0.099419,0.099419,
Bias: 0.747890 ; Weights:0.536342,0.536342,0.536342,0.536342,0.536342,
Bias: 0.434677 ; Weights:0.446920,0.446920,0.446920,0.446920,0.446920,
Bias: 0.726344 ; Weights:0.115603,0.115603,0.115603,0.115603,0.115603,
Bias: 0.191776 ; Weights:0.122992,0.122992,0.122992,0.122992,0.122992,
Bias: 0.580593 ; Weights:0.131417,0.131417,0.131417,0.131417,0.131417,
Bias: 0.553869 ; Weights:0.127281,0.127281,0.127281,0.127281,0.127281,
Hidden Layer 2 :
Bias: 0.635981 ; Weights:0.264508,0.264508,0.264508,0.264508,0.264508,0.264508,0.264508,
Bias: 0.139278 ; Weights:0.688804,0.688804,0.688804,0.688804,0.688804,0.688804,0.688804,
Bias: 0.261701 ; Weights:0.138892,0.138892,0.138892,0.138892,0.138892,0.138892,0.138892,
Bias: 0.497543 ; Weights:0.501107,0.501107,0.501107,0.501107,0.501107,0.501107,0.501107,
Bias: 0.116537 ; Weights:0.089618,0.089618,0.089618,0.089618,0.089618,0.089618,0.089618,
Bias: 0.052273 ; Weights:0.458361,0.458361,0.458361,0.458361,0.458361,0.458361,0.458361,
Bias: 0.365931 ; Weights:0.922276,0.922276,0.922276,0.922276,0.922276,0.922276,0.922276,
Bias: 0.805010 ; Weights:0.301524,0.301524,0.301524,0.301524,0.301524,0.301524,0.301524,
Bias: 0.207930 ; Weights:0.588069,0.588069,0.588069,0.588069,0.588069,0.588069,0.588069,
Bias: 0.914395 ; Weights:0.242511,0.242511,0.242511,0.242511,0.242511,0.242511,0.242511,
Ouput Layer Node Weights:
Bias: 0.465700 ; Weights:0.471324,0.471324,0.471324,0.471324,0.471324,0.471324,0.471324,0.471324,0.471324,0.471324,
Bias: 0.280886 ; Weights:-0.031616,-0.031616,-0.031616,-0.031616,-0.031616,-0.031616,-0.031616,-0.031616,-0.031616,-0.031616,
Output from network:
[ 0.99055883  0.49999638]
Network Errors:
[  9.44117083e-03   3.61526904e-06]
#no_of_inputs,no_of_hidden_layers,nodes_in_hiddens,no_of_outputs,learning_rate
test_network = FeedForwardNet(5,1,[7],2,0.15)

test_network.debug_info()
training_vector = [0.45,0.5,0.55,0.6,0.65]
training_output = [1.0,0.75]

for x in range(5000):
    test_network.FeedForward(training_vector,training_output,Training=True)
    if(x%1000 == 0):
        test_network.debug_info()



test_network.debug_info()
Number of Inputs:  5
Number of Hidden Nodes:  [7]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.357342 ; Weights:0.752028,0.752028,0.752028,0.752028,0.752028,
Bias: 0.451296 ; Weights:0.204235,0.204235,0.204235,0.204235,0.204235,
Bias: 0.378036 ; Weights:0.871455,0.871455,0.871455,0.871455,0.871455,
Bias: 0.368597 ; Weights:0.618812,0.618812,0.618812,0.618812,0.618812,
Bias: 0.466862 ; Weights:0.896672,0.896672,0.896672,0.896672,0.896672,
Bias: 0.135293 ; Weights:0.128040,0.128040,0.128040,0.128040,0.128040,
Bias: 0.135076 ; Weights:0.207825,0.207825,0.207825,0.207825,0.207825,
Ouput Layer Node Weights:
Bias: 0.882584 ; Weights:0.514103,0.514103,0.514103,0.514103,0.514103,0.514103,0.514103,
Bias: 0.445131 ; Weights:0.493296,0.493296,0.493296,0.493296,0.493296,0.493296,0.493296,
Output from network:
[ 0.  0.]
Network Errors:
[ 0.  0.]
Number of Inputs:  5
Number of Hidden Nodes:  [7]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.357307 ; Weights:0.751992,0.751992,0.751992,0.751992,0.751992,
Bias: 0.451146 ; Weights:0.204086,0.204086,0.204086,0.204086,0.204086,
Bias: 0.377955 ; Weights:0.871374,0.871374,0.871374,0.871374,0.871374,
Bias: 0.368413 ; Weights:0.618627,0.618627,0.618627,0.618627,0.618627,
Bias: 0.466743 ; Weights:0.896552,0.896552,0.896552,0.896552,0.896552,
Bias: 0.134836 ; Weights:0.127582,0.127582,0.127582,0.127582,0.127582,
Bias: 0.134534 ; Weights:0.207283,0.207283,0.207283,0.207283,0.207283,
Ouput Layer Node Weights:
Bias: 0.882650 ; Weights:0.514169,0.514169,0.514169,0.514169,0.514169,0.514169,0.514169,
Bias: 0.444041 ; Weights:0.492206,0.492206,0.492206,0.492206,0.492206,0.492206,0.492206,
Output from network:
[ 0.97861302  0.963275  ]
Network Errors:
[ 0.02138698 -0.213275  ]
Number of Inputs:  5
Number of Hidden Nodes:  [7]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.351264 ; Weights:0.745950,0.745950,0.745950,0.745950,0.745950,
Bias: 0.425883 ; Weights:0.178823,0.178823,0.178823,0.178823,0.178823,
Bias: 0.364027 ; Weights:0.857446,0.857446,0.857446,0.857446,0.857446,
Bias: 0.336322 ; Weights:0.586537,0.586537,0.586537,0.586537,0.586537,
Bias: 0.446180 ; Weights:0.875990,0.875990,0.875990,0.875990,0.875990,
Bias: 0.060519 ; Weights:0.053265,0.053265,0.053265,0.053265,0.053265,
Bias: 0.044986 ; Weights:0.117735,0.117735,0.117735,0.117735,0.117735,
Ouput Layer Node Weights:
Bias: 0.937793 ; Weights:0.569312,0.569312,0.569312,0.569312,0.569312,0.569312,0.569312,
Bias: 0.127406 ; Weights:0.175572,0.175572,0.175572,0.175572,0.175572,0.175572,0.175572,
Output from network:
[ 0.98350135  0.75006294]
Network Errors:
[  1.64986535e-02  -6.29355075e-05]
Number of Inputs:  5
Number of Hidden Nodes:  [7]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.352545 ; Weights:0.747231,0.747231,0.747231,0.747231,0.747231,
Bias: 0.431235 ; Weights:0.184174,0.184174,0.184174,0.184174,0.184174,
Bias: 0.367009 ; Weights:0.860429,0.860429,0.860429,0.860429,0.860429,
Bias: 0.343289 ; Weights:0.593504,0.593504,0.593504,0.593504,0.593504,
Bias: 0.450622 ; Weights:0.880432,0.880432,0.880432,0.880432,0.880432,
Bias: 0.075712 ; Weights:0.068459,0.068459,0.068459,0.068459,0.068459,
Bias: 0.063511 ; Weights:0.136260,0.136260,0.136260,0.136260,0.136260,
Ouput Layer Node Weights:
Bias: 0.969181 ; Weights:0.600700,0.600700,0.600700,0.600700,0.600700,0.600700,0.600700,
Bias: 0.126334 ; Weights:0.174500,0.174500,0.174500,0.174500,0.174500,0.174500,0.174500,
Output from network:
[ 0.98683232  0.75004163]
Network Errors:
[  1.31676849e-02  -4.16337819e-05]
Number of Inputs:  5
Number of Hidden Nodes:  [7]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.353461 ; Weights:0.748147,0.748147,0.748147,0.748147,0.748147,
Bias: 0.435064 ; Weights:0.188004,0.188004,0.188004,0.188004,0.188004,
Bias: 0.369132 ; Weights:0.862552,0.862552,0.862552,0.862552,0.862552,
Bias: 0.348217 ; Weights:0.598432,0.598432,0.598432,0.598432,0.598432,
Bias: 0.453771 ; Weights:0.883581,0.883581,0.883581,0.883581,0.883581,
Bias: 0.086779 ; Weights:0.079526,0.079526,0.079526,0.079526,0.079526,
Bias: 0.076946 ; Weights:0.149695,0.149695,0.149695,0.149695,0.149695,
Ouput Layer Node Weights:
Bias: 0.990775 ; Weights:0.622294,0.622294,0.622294,0.622294,0.622294,0.622294,0.622294,
Bias: 0.125578 ; Weights:0.173744,0.173744,0.173744,0.173744,0.173744,0.173744,0.173744,
Output from network:
[ 0.9887527   0.75003108]
Network Errors:
[  1.12473030e-02  -3.10780901e-05]
Number of Inputs:  5
Number of Hidden Nodes:  [7]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.354176 ; Weights:0.748862,0.748862,0.748862,0.748862,0.748862,
Bias: 0.438055 ; Weights:0.190995,0.190995,0.190995,0.190995,0.190995,
Bias: 0.370785 ; Weights:0.864204,0.864204,0.864204,0.864204,0.864204,
Bias: 0.352035 ; Weights:0.602249,0.602249,0.602249,0.602249,0.602249,
Bias: 0.456215 ; Weights:0.886025,0.886025,0.886025,0.886025,0.886025,
Bias: 0.095531 ; Weights:0.088278,0.088278,0.088278,0.088278,0.088278,
Bias: 0.087531 ; Weights:0.160280,0.160280,0.160280,0.160280,0.160280,
Ouput Layer Node Weights:
Bias: 1.007191 ; Weights:0.638711,0.638711,0.638711,0.638711,0.638711,0.638711,0.638711,
Bias: 0.124995 ; Weights:0.173160,0.173160,0.173160,0.173160,0.173160,0.173160,0.173160,
Output from network:
[ 0.99003649  0.75002477]
Network Errors:
[  9.96350660e-03  -2.47667398e-05]
Number of Inputs:  5
Number of Hidden Nodes:  [7]
Number of Outputs:  2
Hidden Layer Node Weights:
Hidden Layer 1 :
Bias: 0.354764 ; Weights:0.749449,0.749449,0.749449,0.749449,0.749449,
Bias: 0.440512 ; Weights:0.193452,0.193452,0.193452,0.193452,0.193452,
Bias: 0.372138 ; Weights:0.865557,0.865557,0.865557,0.865557,0.865557,
Bias: 0.355149 ; Weights:0.605363,0.605363,0.605363,0.605363,0.605363,
Bias: 0.458211 ; Weights:0.888021,0.888021,0.888021,0.888021,0.888021,
Bias: 0.102785 ; Weights:0.095531,0.095531,0.095531,0.095531,0.095531,
Bias: 0.096275 ; Weights:0.169024,0.169024,0.169024,0.169024,0.169024,
Ouput Layer Node Weights:
Bias: 1.020399 ; Weights:0.651918,0.651918,0.651918,0.651918,0.651918,0.651918,0.651918,
Bias: 0.124520 ; Weights:0.172685,0.172685,0.172685,0.172685,0.172685,0.172685,0.172685,
Output from network:
[ 0.99096971  0.75002057]
Network Errors:
[  9.03028733e-03  -2.05702842e-05]

So as you can see, the networks all converge to the expected outputs. In the next post, I’ll be going through training networks on stock data and finding out how accurately the network can forecast stock prices.

Thanks for reading! Please feel free to post any questions/comments/bug fixes!
