# My path finding algorithm.

3 replies to this topic

### #1 ga05as

ga05as

GMC Member

• GMC Member
• 881 posts

Posted 29 December 2011 - 09:24 PM

Hello,
I am attempting to develop an algorithm which finds the shortest path between two points on a square grid, whilst avoiding obstacles.
No diagonal movements are allowed, only up down left and right.

The algorithm is split into two parts:
-The labeling procedure,
-The actual path finding algorithm.

At the moment this is it:

----Labeling procedure----

1) Label the start box 0.

2) Label with 1 every box surrounding the start box that is not filled with an obstacle.
If no such box exists and the end box is not yet labeled, then no path is available: stop.

3) Label with 2 every box surrounding a box labeled 1 that is not filled with an obstacle (and not yet labeled).
If no such box exists and the end box is not yet labeled, then no path is available: stop.

4) Label with 3 every box surrounding a box labeled 2 that is not filled with an obstacle (and not yet labeled).
If no such box exists and the end box is not yet labeled, then no path is available: stop.

5) Continue this procedure until every box that is not filled with an obstacle has a label.
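The labeling procedure above is exactly a breadth-first-search flood fill, so it can be coded with a simple queue. A minimal Python sketch (the grid representation, with `grid[y][x]` true meaning an obstacle, and the function name are my own assumptions):

```python
from collections import deque

def label_grid(grid, start):
    """Breadth-first flood fill of the labeling procedure.
    grid[y][x] is True where there is an obstacle; returns a dict
    mapping (x, y) -> label (distance in steps from start)."""
    labels = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and not grid[ny][nx] and (nx, ny) not in labels):
                # walkable, unlabeled neighbor: give it label n + 1
                labels[(nx, ny)] = labels[(x, y)] + 1
                queue.append((nx, ny))
    return labels
```

Because each box is labeled the first time it is reached, its label equals the length of the shortest path from the start, which is what makes the second stage work.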

----Path finding algorithm----

1) Start at box 0; let the current path be P.

2) If this is the end box, stop; otherwise go to step 3.

3) Let n be the label of the current box, and add the current box to P.

4) At random, choose a box connected to the current box that is labeled n+1 and is not marked redundant.
- If such a box exists, make it the current box and go to step 2.

- If no such box exists, mark the current box as redundant, remove it from P, never return to it, and continue from the previous box in P (go to step 4).
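A sketch of this forward walk in Python, assuming the labels come from the labeling procedure as a dict mapping `(x, y)` to its number (the function name and grid representation are my own):

```python
import random

def find_path(labels, start, end):
    """Random forward walk over the labels (steps 1-4 above).
    labels maps (x, y) -> distance from the start box; returns a
    list of boxes from start to end, or None if end is unreachable."""
    if end not in labels:
        return None
    path = [start]
    redundant = set()
    current = start
    while current != end:
        x, y = current
        n = labels[current]
        choices = [c for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                   if labels.get(c) == n + 1 and c not in redundant]
        if choices:
            current = random.choice(choices)  # step 4: pick at random
            path.append(current)
        else:
            # dead end: mark redundant, back up one box (step 4, second case)
            redundant.add(current)
            path.pop()
            if not path:
                return None
            current = path[-1]
    return path
```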

----------------------------------------------------------------------------------------

Here are two animations showing the two algorithms in action...

Labeling procedure:

Path finding algorithm:

As you can see from the second animation, because the next box is chosen randomly (when there is a choice), it is possible to go the "wrong way" and have to backtrack.
Can you think of a "smart" way of picking which box to go to next, so as to minimize the amount of backtracking?

Edited by ga05as, 29 December 2011 - 10:17 PM.


### #2 ga05as

ga05as

GMC Member

• GMC Member
• 881 posts

Posted 29 December 2011 - 11:15 PM

Improved it, I think.

Instead of running the path finding algorithm forwards, if you work backwards from the end it removes the randomness and should find the optimum solution.
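Working backwards is also simpler to code: starting at the end box with label n, a neighbor labeled n-1 is guaranteed to exist and to lie on a shortest path, so there is never anything to undo. A sketch, again assuming a dict of labels from the labeling procedure (helper name is my own):

```python
def trace_back(labels, end):
    """Walk from the end box down the labels: a neighbor labeled one
    lower always exists, so no backtracking is ever needed."""
    if end not in labels:
        return None  # no path available
    current = end
    path = [end]
    while labels[current] != 0:
        x, y = current
        current = next(c for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                       if labels.get(c) == labels[current] - 1)
        path.append(current)
    path.reverse()  # we walked end -> start, so flip it
    return path
```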

### #3 tangibleLime

tangibleLime

Lunatic

• GMC Elder
• 2520 posts
• Version:GM:HTML5

Posted 30 December 2011 - 04:35 AM

To me it looks like you're going for a Markov Decision Process (MDP) model (http://en.wikipedia....ecision_process).

Using MDPs, you can use something like SARSA or Q-Learning to determine the utilities of each state (essentially the desirability of being in each box) and thereby determine an optimal policy. A policy (usually denoted by the Greek letter $\pi$) is not exactly an actual path - it is a function that tells the agent what to do in each state. This way, regardless of starting position, an optimal path can be followed.

Without getting too far into it..

You can actually (even by hand) calculate the utility of each state using Bellman equations - one for each box:

$U(s)= R(s) + \gamma \sum_{s'} P(s'|s,\pi(s)) U(s')$

Where,
U(s) = Utility of state s
R(s) = Reward of being in state s
$\gamma$ = Learning discount (should be between 0 and 1), which allows you to tell the agent how much to consider future states
$P(s'|s,\pi(s))$ = Probability of moving from state s to s' given the optimal policy

The idea is to find a reward assignment that makes the agent want to reach the goal as quickly as possible. Changing the reward values of states can radically alter the optimal path. For example, if all non-goal states have a reward of 1 and the goal has a reward of 0.9, the optimal policy will force the agent to never reach the goal.
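As a concrete illustration, the Bellman update can be iterated until the utilities settle. The sketch below uses deterministic transitions and takes a max over successor states (value iteration), which slightly generalizes the fixed-policy form quoted above; the states, rewards, and $\gamma$ are invented for the example:

```python
def value_iteration(states, reward, transitions, gamma=0.9, sweeps=100):
    """Repeatedly apply U(s) = R(s) + gamma * max over successors of U(s').
    transitions[s] lists the states reachable from s (deterministic moves,
    so the probability sum in the Bellman equation collapses)."""
    U = {s: 0.0 for s in states}
    for _ in range(sweeps):
        U = {s: reward[s] + gamma * max(U[s2] for s2 in transitions[s])
             for s in states}
    return U

# Invented 3-box corridor: s0 - s1 - goal, with the goal absorbing.
states = ["s0", "s1", "goal"]
reward = {"s0": -0.04, "s1": -0.04, "goal": 1.0}
transitions = {"s0": ["s1"], "s1": ["s0", "goal"], "goal": ["goal"]}
U = value_iteration(states, reward, transitions)
# Utilities rise toward the goal, so the greedy policy walks right.
```

The small negative reward on non-goal states is what pushes the agent to finish quickly; flip those rewards positive and, as described above, the agent never wants to arrive.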

I don't like step 4 of your pathfinding algorithm - how it chooses randomly. True, it is good to occasionally choose randomly, but it would drastically increase efficiency to use some sort of heuristic. Branching off from what I said before, there are methods that are $\epsilon$-greedy. $\epsilon$ is a value between 0 and 1 that denotes the probability of taking a random action ($\epsilon$) versus taking the action defined by the current optimal policy (1-$\epsilon$). Use this with SARSA and you'll get an efficient method of computing an optimal policy, and therefore an optimal path, in any environment, around any obstacles, starting from any box.
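$\epsilon$-greedy action selection on its own is tiny; a sketch in Python (the Q-value table is assumed to come from something like SARSA or Q-Learning, and the names are my own):

```python
import random

def epsilon_greedy(q_values, actions, epsilon=0.1):
    """With probability epsilon explore (pick a random action);
    otherwise exploit the action with the highest current Q-value."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values[a])
```

With `epsilon=0` this is pure exploitation; with `epsilon=1` it is the fully random choice of the original step 4.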

### #4 paul23

paul23

GMC Member

• Global Moderators
• 4005 posts
• Version:GM:Studio

Posted 31 December 2011 - 02:16 AM

The above looks like a (slightly differently implemented) version of Dijkstra's algorithm for pathfinding (http://en.wikipedia....tra's_algorithm), which grows in every direction equally. (If you let this algorithm run in an "open" field, it gives a diamond-shaped growth.)

In pathfinding, two numbers are commonly used: the heuristic "cost" and the move "cost" (called h and g). g is the amount of time/difficulty/whatever you value to reach the point you are analyzing from point A (the start). The heuristic cost h is a rough estimate of the cost from the point you are analyzing to point B (the end).

In what you showed above, the "label number" would be the g.

An optimization is done by reducing the number of nodes to check. This is done with a heuristic: instead of a simple label number, each cell gets both an h and a g (label number) cost. Then you always take the cell with the lowest sum of the two. With a slight change to the algorithm you get A* (the open and closed lists are simple lists, where the open list is always sorted by "g + h" and the closed list is simply a quick-lookup table):

• Add the start cell to the open list.
• UNTIL the open list is empty OR the end node has been added to the closed list:
  • Remove the cell with the lowest g + h from the open list and add it to the closed list.
  • Look at all walkable cells (each called "new cell") around that cell (called "old cell"):
    • IF the new cell IS NOT in the closed list:
      • Calculate g for the new cell (basically the old cell's g + 1).
      • IF the new cell IS NOT in the open list OR the calculated g is SMALLER than its previously calculated g (this can happen if an object blocks the way and you can move around it in 2 ways):
        • Calculate h for the new cell (Manhattan distance, diagonal distance, Pythagorean distance, or an adapted distance calculation).
        • Add the new cell to the open list (or replace its value), with the sorting value g + h.
        • Set the old cell as "parent" of the new cell (so you can "walk it back") - store the parent, g and h in the cell.
• IF the end node IS NOT in the closed list:
  • No path can be found.
• ELSE:
  • Take the parent node of the end node and add it to the path.
  • WHILE the parent node IS NOT the start node:
    • Take the parent of that parent node and add it to the path (you basically walk the path backwards here, using the stored parents).
Now the path is the shortest path "in reverse". Using A* yields a typical "rectangle"-shaped growth when used in an open field. As I can't really explain it well in a short manner here, I recommend reading this site. It is a bit of a read, but very understandable and covers a lot of pathfinding problems.
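The steps above can be sketched in Python (the grid representation and function name are my own; Manhattan distance serves as h, and a heap stands in for the sorted open list):

```python
import heapq

def astar(grid, start, end):
    """A* on a 4-connected grid; grid[y][x] True means obstacle.
    Manhattan distance is the heuristic h; each step adds 1 to g."""
    def h(p):
        return abs(p[0] - end[0]) + abs(p[1] - end[1])
    open_heap = [(h(start), 0, start)]   # entries are (g + h, g, cell)
    parent = {start: None}
    g_cost = {start: 0}
    closed = set()
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell in closed:
            continue                     # stale duplicate entry
        closed.add(cell)
        if cell == end:
            break                        # end node reached the closed list
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and not grid[ny][nx] and nxt not in closed
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1      # new or cheaper route found
                parent[nxt] = cell       # store the parent to walk it back
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    if end not in parent:
        return None                      # no path can be found
    path = []
    cell = end
    while cell is not None:              # walk the parents back to start
        path.append(cell)
        cell = parent[cell]
    path.reverse()
    return path
```

Pushing duplicate heap entries and skipping stale ones on pop is a common stand-in for "replace the value in the sorted open list".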
