# Neural Nets!

10 replies to this topic

### #1 coolkat677

coolkat677

GMC Member

• GMC Member
• 113 posts

Posted 19 August 2009 - 05:20 AM

Neural Networks!

1. Introduction on Neural Networks

2. Applications of Neural Networks

3. How the Scripts Work

4. Scripts Download

1. Introduction on Neural Networks
Those who are new to Neural Networks are probably asking: "What are these so-called 'Neural Networks' people are talking about these days? And what is so darn special about them?"

Well, the first thing that you need to know about Neural Networks -- or NNs for short -- is that they're a very simple concept to grasp. NNs are based on the neurons in a brain (if you don't know what neurons are, here's a good link).

A neuron is made up of essentially 4 main parts. You have the inputs, which take in all the information. You have the weights, which multiply each input by a specific number. The hidden layer adds up the weighted inputs. And lastly you have the output; the output takes in all the hidden-layer totals and puts them together. If the total is past a certain point, it shoots a signal with a specific number to something else that takes in that output.
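The flow described above (inputs, weights, a sum, and a threshold) can be sketched in a few lines. This is an illustrative Python sketch, not the GM scripts; the weight and threshold values are made up for the example:

```python
# A single artificial neuron: weighted sum of inputs, then a threshold.
# All weight and threshold values below are arbitrary illustration values.

def neuron(inputs, weights, threshold):
    # Each input is multiplied by its weight, then everything is summed.
    total = sum(i * w for i, w in zip(inputs, weights))
    # Fire (output 1) only if the total passes the threshold.
    return 1 if total > threshold else 0

print(neuron([1.0, 0.5], [0.8, -0.2], 0.5))  # 1: total 0.7 passes 0.5
print(neuron([1.0, 0.5], [0.1, 0.1], 0.5))   # 0: total 0.15 does not
```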

That is what a neural network is made up of. Sometimes it's only a dozen of these neurons, but there could be many more. So if you're still iffy on neural networks, I suggest reading a very good and easy-to-understand tutorial on them. Found here.

2. Applications of Neural Networks
If you read the tutorial mentioned above, you probably have a good grasp of the uses of NNs. If you didn't read it, here are some of the basic applications of neural networks.

• Image Recognition
• Pattern Recognition
• Object Avoidance
• Following an Object
• Even a combination of all of them

These are just the basic applications of neural networks, but in fact they can do so much more! You can even make a learning AI if you're good enough...

3. How the Scripts Work
The scripts work in a simple yet powerful way: through a lot of interconnected ds_grids. There are four sets of grids: input, hidden, output, and weight. The input takes the input variables and sends them to all of the connected weights. Then the weights multiply the input by their weight value and send the result to the specified hidden node. The hidden node adds up all of the weighted inputs and finally sends the total to the output node to be stored. Lastly, you can get the outputs and use them to your advantage.
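The data flow the paragraph describes (input → weight → hidden → output) can be mocked up like this. Python lists stand in for the ds_grids here, and all numbers are made-up examples:

```python
# Toy forward pass mirroring the described structure: inputs feed weighted
# links, hidden nodes sum their links, the output node sums the hidden layer.

inputs = [2.0, 3.0]

# links[h] is a list of (input_index, weight) pairs feeding hidden node h
links = [
    [(0, 0.5), (1, 1.0)],   # hidden node 0 gets input0*0.5 + input1*1.0
    [(0, -1.0)],            # hidden node 1 gets input0*-1.0
]

# Each hidden node adds up its weighted inputs.
hidden = [sum(inputs[i] * w for i, w in pairs) for pairs in links]

# The output node just totals the hidden layer in this sketch.
output = sum(hidden)
print(hidden, output)  # [4.0, -2.0] 2.0
```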

First, before you do anything, please add these constants:

| Constant | Value |
| --- | --- |
| input | 1 |
| hidden | 2 |
| output | 3 |
| weight | 4 |
| NIA | -2*3.141592653 |
| ON | 2*3.141592653 |
| OFF | -2*3.141592653 |

Now on to how to use the scripts. First you have to create the actual Neural Net; just use the script create_NN(). It returns an index for that NN so that you can use it with all the other scripts.

NOTE: The Neural Net Index, or NNI, is the value returned by the create_NN() script. It is used for ALL the rest of the scripts, because you have to specify which neural network the node you are using belongs to. Don't forget that!

Next, you have to create your inputs. Use the script create_input(NNI, Variable Name) to create an input. The variable name is the variable that feeds the input. You have to create that variable somewhere else yourself; the scripts will automatically fetch its value later on.

After you have created all your inputs, you have to create the hidden nodes next. Use the script create_hidden(NNI) to create a single hidden node.

After you're done making your hidden nodes, you have to create the output layer next with the script: create_output(NNI).

Now you're done creating the basic infrastructure of your neural net! What you have to do now is link everything together. First you have to link the inputs with the hidden layer. You don't have to link inputs to weights, because that is done automatically when connecting an input to a hidden node. To do this, use the script link_nodes(NNI, type1, type2, node id, node id). When using the script you have to specify what kind of nodes you are linking together; those are the arguments type1 and type2. The node ids are the indexes that are returned when creating the nodes.

An example use of it would be: link_nodes(MovementNN, input, hidden, HandInput, HandHiddenLayer).

When you are done connecting everything, you have to set the weights (they default to 0). Using set_weight(NNI, weight id, value), you can set the weight to the specified value.

Now you're done creating and setting everything! Onto the step event.

The first thing that you have to do in the step event is refresh all the variables you want to use in your NN. After that, just use the script refresh_variables(NNI) to actually set the neural net in motion. Then you can use the script get_output(NNI, output id) to get the final output of the specified output of the specified Neural Net.
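The whole call order (create → link → set weights → step → read outputs) can be mocked up in miniature. This is a hypothetical Python sketch whose names only mirror the tutorial's GML scripts, not the actual scripts; it collapses the hidden layer for brevity:

```python
# Minimal mock of the described workflow. The NN class and its methods are
# illustrative stand-ins for the GML scripts (create_NN, create_input, etc.).

class NN:
    def __init__(self):
        self.inputs, self.outputs, self.links = {}, {}, []

    def create_input(self, name):
        self.inputs[name] = 0.0
        return name

    def create_output(self):
        oid = len(self.outputs)
        self.outputs[oid] = 0.0
        return oid

    def link(self, in_id, out_id):
        # Weight defaults to 0; weight ids start at 1, as in the tutorial.
        self.links.append([in_id, out_id, 0.0])
        return len(self.links)

    def set_weight(self, wid, value):
        self.links[wid - 1][2] = value

    def step(self, variables):
        # "Refresh" the input variables, then run the forward pass.
        for name in self.inputs:
            self.inputs[name] = variables[name]
        for oid in self.outputs:
            self.outputs[oid] = 0.0
        for in_id, out_id, w in self.links:
            self.outputs[out_id] += self.inputs[in_id] * w

    def get_output(self, oid):
        return self.outputs[oid]

net = NN()
hand = net.create_input("hand")   # create inputs
out = net.create_output()         # create outputs
wid = net.link(hand, out)         # link everything together
net.set_weight(wid, 2.0)          # set the weights
net.step({"hand": 3.0})           # step event: refresh and run
print(net.get_output(out))        # 6.0
```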

Now you're done! You can use the output for anything you like! Wasn't that easy? <--Sarcasm?

Last note (if you didn't catch this the first time): all the creation scripts -- for the NN, inputs, hidden nodes, outputs, and weights -- return the index of that specific node. And through a funky programming glitch, all the indexes start at 0 EXCEPT weights, which start at 1.

4. Scripts Download
Now if you're PUMPED about making a neural net, here are some download links

Example
File Front
Box.net

Scripts
File Front
Box.net

Memory Scripts
File Front
Box.net

EDIT: The scripts and examples are now all commented

Comments, questions, suggestions, and bug sightings are all welcome!

Right, these are free to use, though I just ask that you put my name in somewhere. It's not a necessity; it's just that I don't want people taking credit for my work. Thanks in advance! (By the way, my name is Sam Rohde.)

Hope this helps! Good Luck!

Edited by coolkat677, 19 August 2009 - 10:39 PM.

• -1

### #2 Rexhunter99

Rexhunter99

GMC Member

• GMC Member
• 922 posts

Posted 09 October 2009 - 04:59 PM

Lol, I saw the title and immediately thought of Jurassic Park: Operation Genesis.

Interesting indeed; you have simplified this down a lot... in fact I am terrible when it comes to AI outside simple if/then/else, but I understand this somewhat. Good job, this has earned a browser favorite.
• 0

### #3 DZiW

DZiW

GMC Member

• GMC Member
• 727 posts

Posted 09 October 2009 - 08:59 PM

Hello coolkat.

Couldn't you find a better host without ero/lamer ads and other garbage? FileFront keeps nagging:

File is Unavailable.
The file you are attempting to download is not currently available on our servers or is being processed. Please try your download in a few minutes.

And BOX says that

It appears that your firewall may be blocking Box.net or you are encountering an error.

Please contact your IT administrator to configure your firewall to recognize all sub-domains of .box.net and .boxcdn.net. The ports that should be opened for these domains are 80 and 443.

Why, my Admin and I think it should NOT be opened for any alien domains.

Anyway, it did show the link I needed to download.
TY

Edited by DZiW, 09 October 2009 - 09:11 PM.

• 0

### #4 coolkat677

coolkat677

GMC Member

• GMC Member
• 113 posts

Posted 09 October 2009 - 09:16 PM

Thanks for your replies everyone! Hopefully this system will work for people wanting easy NNs.

And I tried all the download sites, and it seemed that only the memory scripts from FileFront weren't working too well. Have you tried the Box.net mirror? That site works very nicely.
• 0

### #5 hanson

hanson

GMC Member

• GMC Member
• 444 posts
• Version:GM8

Posted 09 October 2009 - 11:24 PM

Yay! I've always wanted to toy with NNs but never got around to it. Now I easily can. Thanks!

-hanson
• 0

### #6 coolkat677

coolkat677

GMC Member

• GMC Member
• 113 posts

Posted 13 October 2009 - 11:50 PM

Hey, thanks for all of your downloads and posts! Please, if you experience a bug of any sort or want to see new features, don't hesitate to ask! I probably won't make any improvements if people like it as is (busy with work / school).
• 0

### #7 score_under

score_under

Least kawaii

• GMC Member
• 1319 posts

Posted 14 October 2009 - 08:35 PM

It appears that your firewall may be blocking Box.net or you are encountering an error.

Please contact your IT administrator to configure your firewall to recognize all sub-domains of .box.net and .boxcdn.net. The ports that should be opened for these domains are 80 and 443.

Why, my Admin and I think it should NOT be opened for any alien domains.

It's funny when people who don't know what they're doing think they know what they're doing.

The second sentence means your firewall may be blocking either the DNS servers or the outbound connections, and the ports to open are outbound, not inbound, meaning you can only use that firewall exception to connect from inside, not from outside. It opens up exactly zero vulnerabilities.

What is more likely, however, is that you were experiencing transient network problems.
• 0

### #8 sabriath

sabriath

12013

• GMC Member
• 3149 posts

Posted 31 October 2009 - 08:55 PM

I personally like neural networks, and this seems good enough, just a few ideas though.

I think the scripts should be more easily automated. For example:

nn_create() -- returns an NNI
nn_add_input(nni, name) -- adds the variable name to the constructor of the NN at current position
nn_add_output(nni, name, deltaname) -- adds the variable names to the constructor of the NN at current position
nn_add_hidden(nni, size) -- adds 'size' number of nodes meshed from previous block at current position
nn_begin(nni) -- places a marker at the current position for recursion (hopfield networks)
nn_end(nni) -- meshes to the last marked position
nn_compile(nni) -- returns an NNC, takes all information from the NNI to create the neural network itself

nn_step(nnc) -- calculates 1 iteration through the neural network, updating output variables
nn_train(nnc) -- uses the difference between the output name and output deltaname to compute an error, then changes the weights through 1 iteration of training toward 0 error.
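The nn_train idea sketched here is essentially the delta rule: nudge each weight in proportion to the error between the output and its target. A toy single-weight Python version (the function name and learning rate are made up for illustration):

```python
# Toy delta-rule training for one weight, in the spirit of the proposed
# nn_train: the error (target - output) pulls the weight toward 0 error.

def train_step(weight, x, target, rate=0.1):
    output = weight * x
    error = target - output
    return weight + rate * error * x  # delta rule update

w = 0.0
for _ in range(100):
    w = train_step(w, 2.0, 6.0)  # learn the mapping 2 -> 6, i.e. w -> 3

print(round(w, 3))  # 3.0
```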

Weights should be automatically randomized when compiled. Because of this, possibly have:

nn_save(fileid, nnc) -- saves the current neural net solution into the binary opened file at current position
nn_load(fileid) -- loads the neural network solution from the binary opened file at current position, returns NNC

In a teacher/student arrangement (or father/son setup), we would have 1 net train another net in a controlled situation to create somewhat of a life cycle:

nn_teach(destnnc, sourcennc, tolerance) -- trains 'destnnc' neural network using 'sourcennc' until the error becomes less than 'tolerance'

When you have mother and father situation, you can still use the above formula by copying one neural network and training with the other:

nn_copy(nnc) -- returns an nnc that is an exact duplication

So, for example:

```
i = nn_create();
nn_add_input(i, "hunger");
nn_add_input(i, "thirst");
nn_add_hidden(i, 5);
nn_add_hidden(i, 5);
nn_add_output(i, "direction", "tdirection");
nn_add_output(i, "speed", "tspeed");
nnc = nn_compile(i);
with (instance_create(x, y, obj_bot))
{
    tspeed = 3;
    hunger = 1000;
    thirst = 1000;
    nnc = other.nnc;
}

// obj_bot step event
thirst -= 2;
hunger -= 1;
if (thirst < 0) instance_destroy();
if (hunger < 0) instance_destroy();

if (thirst < 300)
    target = instance_nearest(x, y, obj_water);
else if (hunger < 300)
    target = instance_nearest(x, y, obj_food);
else
{
    nn_step(nnc);
    exit; // no target, so nothing to train against this step
}
tdirection = point_direction(x, y, target.x, target.y);
nn_step(nnc);
nn_train(nnc);

// just for fun, collision with obj_bot (what this is)
i = other.nnc;
hunger /= 2;
thirst /= 2;
with (instance_create(x, y, obj_bot))
{
    nnc = nn_copy(other.nnc);
    nn_teach(nnc, other.i, 0.5);
    tspeed = 3;
    hunger = other.hunger;
    thirst = other.thirst;
}

// collision with obj_food
hunger = min(hunger + 10, 1000);

// collision with obj_water
thirst = min(thirst + 10, 1000);
```

• 0

### #9 Universal_X

Universal_X

GMC Member

• New Member
• 126 posts

Posted 05 March 2010 - 11:58 PM

How advanced is this?

Do you use the gaussian function and the sigmoid function?

And how many combinations can each neuron send?

Looks cool though! I've been trying to do this for a week.
• 0

### #10 Sammi3

Sammi3

GMC Member

• GMC Member
• 78 posts
• Version:GM8

Posted 01 December 2012 - 01:21 PM

I've been interested in making Neural Networks for some time now, and now I've found out that it is possible in GM, which makes it much easier. Firstly, what is a weight exactly? I know it's a number that multiplies the input, but what does it correspond to in a biological neuron? Is it the strength of the connection between the input and hidden nodes? Also, can you explain how your version of a neural network learns? Thanks in advance.
• 0

### #11 sabriath

sabriath

12013

• GMC Member
• 3149 posts

Posted 24 March 2013 - 01:53 PM

I've been interested in making Neural Networks for some time now, and now I've found out that it is possible in GM, which makes it much easier. Firstly, what is a weight exactly? I know it's a number that multiplies the input, but what does it correspond to in a biological neuron? Is it the strength of the connection between the input and hidden nodes? Also, can you explain how your version of a neural network learns? Thanks in advance.

In biology, the firing of the axons of a neuron across the synaptic gap releases an X amount of chemicals, which are almost immediately picked up by the receiving node.  The amount of chemicals released can be directly related to a value known as "input."  On the outside of the dendrites (the input side) there are Ca++ voltage gates that limit the potential of the entering chemicals (which can be thought of as fractioning the input value).  Inside the dendrites are agents that uptake the input chemicals and give off an electrical signal across the cell membrane, the voltage potential varies depending on the specific uptake chemicals provided in that area of the dendrite (which can also be seen as fractioning or rather multiplying by the input).  When the voltage reaches the soma (right before the axon), certain blocking chemicals are attached to the cell wall which absorb some of the voltage potential (this is similar to testing against a bias value), which allows the transmission of the potential to continue only if it is a sufficient load.
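A numeric analog of the biology above: the weight scales the incoming signal (the connection strength), and the bias plays the role of the blocking chemicals that absorb some potential before the axon fires. This Python sketch uses made-up values purely for illustration:

```python
# The weight scales the "chemical" signal (connection strength); the bias
# absorbs part of the potential, so only a sufficient load gets through.

def fires(signal, weight, bias):
    return (signal * weight - bias) > 0

print(fires(1.0, 0.9, 0.5))  # True: strong connection overcomes the bias
print(fires(1.0, 0.3, 0.5))  # False: weak connection is absorbed
```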

There are other processes involved, but this is the basic idea on how ANN (artificial neural network) came around.  As for how this version "learns," I think you actually have to do it yourself (I'm not seeing anything in the OP about it)....maybe if there's enough interest, I'll write my own scripts.

• 0
