Artificial Intelligence Programming Basics

Artificial intelligence can be understood as making a computer, a computer-controlled robot, or a piece of software think the way humans do, i.e. intelligently.

To accomplish this, we study in detail how the human brain thinks, learns, makes decisions, and works through problems. The outcomes of that study form the basis for developing intelligent systems and software.

What is an Agent?

Anything capable of perceiving its environment through ‘sensors’ and acting upon that environment through ‘effectors’ is classified as an agent.

A human agent’s sensory organs are its sensors, namely the eyes, ears, tongue, nose, and skin. Its legs, hands, and mouth correspond to the effectors.

Next, a robotic agent uses cameras and infrared range finders as its sensors, and various motors and actuators as its effectors.

Lastly, a software agent uses encoded bit strings as both its percepts and its actions.

For an agent to be ideal and rational, it must be able to choose actions that maximise its performance measure, given –

  • The percept sequence it has observed
  • The knowledge built into it

Rationality is therefore determined by the following four factors –

  • The performance measure, which decides the level of success achieved.
  • The percept sequence the agent has observed to date.
  • The agent’s prior knowledge of its environment.
  • The actions the agent can carry out.

A rational agent always performs the right action, defined as the action that achieves the greatest success for the agent given that percept sequence. The problems an agent solves are characterised by its Performance Measure, Environment, Actuators, and Sensors, abbreviated as PEAS.
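
To make the percept-action loop concrete, here is a minimal sketch of a simple table-driven reflex agent; the ReflexAgent class, its rule table, and the toy vacuum-world percepts are illustrative assumptions, not a standard API.

```python
# A minimal table-driven reflex agent; the class, rule table, and
# toy vacuum-world percepts are illustrative, not a standard API.

class ReflexAgent:
    def __init__(self, rules):
        self.rules = rules              # built-in knowledge: percept -> action
        self.percept_sequence = []      # percept history kept by the agent

    def step(self, percept):
        # Sense: record what the "sensors" report.
        self.percept_sequence.append(percept)
        # Act: pick the action the rules recommend for this percept.
        return self.rules.get(percept, "no-op")

# Toy vacuum world: the agent senses (location, dirt status) and acts.
rules = {
    ("A", "dirty"): "suck", ("A", "clean"): "move-right",
    ("B", "dirty"): "suck", ("B", "clean"): "move-left",
}
agent = ReflexAgent(rules)
print(agent.step(("A", "dirty")))   # -> suck
print(agent.step(("A", "clean")))   # -> move-right
```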

Environment

Some programs operate in entirely artificial environments, confined to keyboard input, a database, computer file systems, and character output on a screen.

In contrast, software agents, also called softbots or software robots, live in rich softbot domains: the simulator provides a very detailed, complex environment, and the software agent must choose from a long array of actions. For instance, a softbot designed to scan a customer’s online likes and dislikes and then display items of interest works in both an artificial and a real environment.

One of the most famous artificial environments is the Turing Test environment, in which one real agent and one or more artificial agents are tested on equal terms. This is a very challenging environment, as a software agent is required to perform as well as a human.

Brute-force Search Algorithms

Depth First Search

It is implemented using a LIFO stack data structure. It creates the same set of nodes as the breadth-first method, only in a different order.

Since only the nodes on a single path from root to leaf are stored in each iteration, the space requirement is linear: with branching factor b and maximum depth m, the storage needed comes to bm nodes.

Disadvantage – Such an algorithm may never terminate, continuing down one path indefinitely. The solution is to choose a cut-off depth. If the ideal cut-off is d and the chosen cut-off is less than d, the algorithm may fail; if it is greater than d, execution time increases.

Its complexity depends on the number of paths, and it cannot check for duplicate nodes.
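
As a rough sketch, the following Python function implements depth-first search iteratively with a LIFO stack and a cut-off depth; the adjacency-dictionary graph and the function name are assumptions made for this example.

```python
# A minimal depth-first search sketch using a LIFO stack with a
# cut-off depth. The graph is a plain adjacency dict; all names
# here are illustrative, not from any library.

def depth_first_search(graph, start, goal, cutoff):
    stack = [(start, [start])]           # LIFO stack of (node, path so far)
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        if len(path) - 1 < cutoff:       # only expand nodes above the cut-off depth
            for child in graph.get(node, []):
                stack.append((child, path + [child]))
    return None                          # goal not found within the cut-off

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["G"]}
print(depth_first_search(graph, "A", "G", cutoff=3))   # ['A', 'C', 'E', 'G']
```

Note that, just as described above, this sketch keeps no record of visited nodes, so it may re-explore duplicates.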

Breadth First Search

This search begins from the root node, examines all the neighbouring nodes, and then proceeds to the neighbours at the next level, generating one tree at a time until a solution is found. It is implemented using a FIFO queue data structure. Breadth-first search finds the shortest path to the solution.

If the branching factor (the average number of child nodes for a given node) is b and the depth is d, then the number of nodes at level d is b^d.
The total number of nodes generated in the worst case is b + b^2 + b^3 + … + b^d.

Disadvantage – Since each level of nodes must be saved in order to create the next one, this search consumes a lot of memory: the space required to store the nodes grows exponentially.

The complexity of this search depends on the number of nodes, and it can easily check for duplicate nodes.
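
A minimal sketch of breadth-first search with a FIFO queue might look like this (the graph representation and names are again illustrative); the visited set is what makes the duplicate check easy.

```python
from collections import deque

# A minimal breadth-first search sketch using a FIFO queue. Because
# it explores level by level, the first path found to the goal is
# the shortest one; the visited set skips duplicate nodes.

def breadth_first_search(graph, start, goal):
    queue = deque([(start, [start])])    # FIFO queue of (node, path so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:     # the duplicate check BFS allows easily
                visited.add(child)
                queue.append((child, path + [child]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["G"]}
print(breadth_first_search(graph, "A", "G"))   # ['A', 'C', 'E', 'G']
```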

Iterative Deepening Depth-First Search

It performs a depth-first search to level 1, starts over, performs a complete depth-first search to level 2, and continues in this way until a solution is found.

It generates a node only after all the nodes below it have been generated, and it saves only a stack of nodes. The algorithm ends when it finds a solution at depth d. The number of nodes generated at depth d is b^d and at depth d−1 is b^(d−1).
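
A minimal sketch of the idea, assuming the same illustrative adjacency-dictionary graph as before: run a depth-limited search with limit 1, then limit 2, and so on.

```python
# A minimal iterative-deepening sketch: repeat a depth-limited DFS
# with an increasing depth limit until the goal is found.

def depth_limited(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:                        # reached the current depth limit
        return None
    for child in graph.get(node, []):
        found = depth_limited(graph, child, goal, limit - 1, path + [child])
        if found:
            return found
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):    # level 0, 1, 2, ...
        result = depth_limited(graph, start, goal, limit, [start])
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "C": ["E"], "E": ["G"]}
print(iterative_deepening(graph, "A", "G"))   # ['A', 'C', 'E', 'G']
```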

Uniform Cost Search

The frontier is sorted by increasing cost of the path to a node, and the least-cost node is always expanded first. Uniform cost search is identical to breadth-first search when every transition has the same cost.

Paths are explored in increasing order of their cost.

The biggest disadvantage is that there may be multiple long paths with cost ≤ C* (the cost of the optimal solution), and uniform cost search must explore them all.
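
Here is a rough Python sketch of uniform cost search using a priority queue ordered by path cost; the weighted adjacency dictionary is an illustrative assumption.

```python
import heapq

# A minimal uniform-cost search sketch. The frontier is a priority
# queue ordered by path cost, so the cheapest path is expanded first.
# Edge costs live in the adjacency dict: node -> [(child, cost), ...].

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]      # (cost so far, node, path)
    best = {start: 0}                     # cheapest known cost to each node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for child, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(child, float("inf")):
                best[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("G", 9)], "C": [("G", 2)]}
print(uniform_cost_search(graph, "A", "G"))   # (4, ['A', 'B', 'C', 'G'])
```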

Heuristic Search Algorithms

A* Search

It is the best-known form of best-first search. It avoids expanding paths that are already expensive and expands the most promising paths first.

f(n) = g(n) + h(n), where
g(n) is the cost so far to reach the node,
h(n) is the estimated cost to get from the node to the goal, and
f(n) is the estimated total cost of the path through n to the goal. Nodes are expanded in increasing order of f(n), using a priority queue.
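
A minimal A* sketch under these definitions might look as follows; the heuristic is supplied as a plain dictionary purely for illustration, and it must never overestimate the true remaining cost for A* to stay optimal.

```python
import heapq

# A minimal A* sketch: nodes are expanded in increasing order of
# f(n) = g(n) + h(n). The heuristic h is a dict here purely for
# illustration; it must never overestimate the true cost to the goal.

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}                           # cheapest known g for each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + h[child], new_g, child, path + [child]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("G", 5)], "C": [("G", 1)]}
h = {"A": 4, "B": 5, "C": 1, "G": 0}
print(a_star(graph, h, "A", "G"))   # (5, ['A', 'C', 'G'])
```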

Pure Heuristic Search

This search expands nodes in order of their heuristic values. It maintains two lists: a closed list for nodes that have already been expanded, and an open list for nodes that have been created but not yet expanded.

On each iteration, the node with the lowest heuristic value is expanded: all its child nodes are created and it is placed on the closed list. The heuristic function is then applied to the child nodes, which are placed on the open list according to their heuristic values. During processing, shorter paths are preserved for later use and longer ones are discarded.
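
A rough sketch of this open/closed-list bookkeeping, with the heuristic again given as an illustrative dictionary:

```python
import heapq

# A minimal sketch of the open/closed-list bookkeeping described
# above: expand the open node with the smallest heuristic value,
# move it to the closed list, and put its children on the open list.

def pure_heuristic_search(graph, h, start, goal):
    open_list = [(h[start], start, [start])]   # ordered by heuristic value
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)                       # node is now fully expanded
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

graph = {"A": ["B", "C"], "B": ["G"], "C": ["G"]}
h = {"A": 3, "B": 2, "C": 1, "G": 0}
print(pure_heuristic_search(graph, "A", "G"))   # ['A', 'C', 'G']
```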

Greedy Best First Search

It expands the node that is estimated to be closest to the goal, evaluating nodes by f(n) = h(n) alone. A priority queue is used for the expansion.

Its disadvantage is that the search is not optimal and, without bookkeeping, it can get stuck in loops, as the toy example below shows.
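
The sketch below illustrates the loop problem: with a misleading heuristic and no closed list, the search oscillates between two nodes and never reaches the goal. The graph and heuristic values are contrived for illustration, and a step cap is added only so the demo terminates.

```python
import heapq

# A toy illustration of the loop problem: with a misleading heuristic
# and no closed list, greedy best-first bounces between A and B and
# never expands D, the node that actually leads to the goal.

def greedy_without_closed_list(graph, h, start, goal, max_steps=10):
    open_list = [(h[start], start)]
    steps = []                                 # expansion order, for inspection
    while open_list and len(steps) < max_steps:
        _, node = heapq.heappop(open_list)
        steps.append(node)
        if node == goal:
            return steps
        for child in graph.get(node, []):
            heapq.heappush(open_list, (h[child], child))
    return steps                               # step cap hit; goal never reached

graph = {"A": ["B"], "B": ["A", "D"], "D": ["G"]}
h = {"A": 1, "B": 2, "D": 3, "G": 0}           # misleading: D looks worst
print(greedy_without_closed_list(graph, "A", "G"))
# -> ['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B', 'A', 'B']
```

Keeping a closed list of expanded nodes, as in the pure heuristic search sketch above, is what breaks this oscillation.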

In this article, you learnt some basics of Artificial Intelligence. As you go on exploring the field more, you will realize that it is truly amazing!
