This is a common question on this board:
What is reasoning?
In computer science and AI when we say "reasoning" we mean that we have a
theory and we can derive the consequences of the theory by application of some
inference procedure.
A theory is a set of facts and rules about some environment of interest: the
real world, mathematics, language, etc. Facts are things we know (or assume)
to be true: they can be direct observations, implications, or guesses. Rules are
conditionally true and so most easily understood as implications: if we know
some facts are true we can conclude that some other facts must also be true.
An inference procedure is some system of rules, separate from the theory, that
tells us how we can combine the rules and facts of the theory to squeeze out
new facts, or new rules.
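To make this concrete, here is a minimal sketch in Python of a theory (facts plus rules) and one simple inference procedure, forward chaining. The representation is purely illustrative: facts are strings, and rules are (premises, conclusion) pairs; no real reasoning system is this simple.

```python
# Facts: things we know (or assume) to be true.
facts = {"rains", "outside"}

# Rules: if all the premises are true, the conclusion is true.
rules = [
    ({"rains", "outside"}, "wet"),
    ({"wet"}, "cold"),
]

def forward_chain(facts, rules):
    """Inference procedure: repeatedly apply rules to known facts
    until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Derives "wet" and "cold" in addition to the given facts.
print(forward_chain(facts, rules))
```

Note that the inference procedure knows nothing about rain or streets: it is a separate, general mechanism that works on any theory written in this fact-and-rule form.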
There are three types of reasoning, what we may call modes of inference:
deduction, induction and abduction. Informally, deduction means that we start
with a set of rules and some facts and derive new, unobserved facts implied by
the rules;
induction means that we start with a set of rules and some observations and
derive new rules that imply the observations; and abduction means that we
start with some rules and some observations and derive new unobserved facts
that imply the observations.
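The three modes can be contrasted with a toy Python sketch built around a single rule, "if it rains, the street is wet". The functions below are purely illustrative stand-ins for real deductive, inductive, and abductive procedures.

```python
# One rule, represented as (premise, conclusion): rains -> wet.
rule = ("rains", "wet")

def deduce(rule, fact):
    """Deduction: from the rule and an observed premise,
    conclude the consequence."""
    premise, conclusion = rule
    return conclusion if fact == premise else None

def induce(observations):
    """Induction: from repeated (premise, conclusion) observations,
    guess a rule that implies them."""
    premise, conclusion = observations[0]
    if all(obs == (premise, conclusion) for obs in observations):
        return (premise, conclusion)
    return None

def abduce(rule, observation):
    """Abduction: from the rule and an observed consequence,
    hypothesize an unobserved fact that would imply it."""
    premise, conclusion = rule
    return premise if observation == conclusion else None

print(deduce(rule, "rains"))           # 'wet'
print(induce([("rains", "wet")] * 3))  # ('rains', 'wet')
print(abduce(rule, "wet"))             # 'rains'
```

Abduction, in particular, is only a plausible guess: the street may be wet for some other reason, which is why abductive conclusions are hypotheses rather than certainties.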
It's easier to understand all this with examples.
One example of deductive reasoning is planning, or automated planning and
scheduling, a field of classical AI research. Planning is the "model-based
approach to autonomous behaviour", according to the textbook on planning by
Geffner and Bonet. An autonomous agent starts with a "model" that describes
the environment in which the agent is to operate as a set of entities with
discrete states, and a set of actions that the agent can take to change those
states. The agent is given a goal, an instance of its model, and it must find
a sequence of actions, which we call a "plan", to take the entities in the
mo