YeGoblynQueenne:
This is a common question on this board:

What is reasoning?

In computer science and AI when we say "reasoning" we mean that we have a theory and we can derive the consequences of the theory by application of some inference procedure.

A theory is a set of facts and rules about some environment of interest: the real world, mathematics, language, etc. Facts are things we know (or assume) to be true: they can be direct observations, implications, or guesses. Rules are conditionally true and so most easily understood as implications: if we know some facts are true, we can conclude that some other facts must also be true. An inference procedure is some system of rules, separate from the theory, that tells us how we can combine the rules and facts of the theory to squeeze out new facts, or new rules.
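To make the separation concrete, here is a minimal sketch (my own toy example, not anything standard): facts are atoms, rules are (premises, conclusion) pairs, and the inference procedure is forward chaining, which keeps applying rules whose premises already hold until no new fact falls out.

```python
# Theory: a set of facts plus a set of rules (premises -> conclusion).
facts = {"it_rains"}
rules = [
    ({"it_rains"}, "ground_is_wet"),
    ({"ground_is_wet"}, "shoes_get_muddy"),
]

def forward_chain(facts, rules):
    """Inference procedure, separate from the theory: repeatedly apply
    every rule whose premises hold until a fixpoint is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['ground_is_wet', 'it_rains', 'shoes_get_muddy']
```

Note that the same `forward_chain` procedure works unchanged on any other theory you hand it; that independence is the point of keeping the inference procedure separate.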

There are three types of reasoning, what we may call modes of inference: deduction, induction and abduction. Informally, deduction means that we start with a set of rules and derive new unobserved facts, implied by the rules; induction means that we start with a set of rules and some observations and derive new rules that imply the observations; and abduction means that we start with some rules and some observations and derive new unobserved facts that imply the observations.
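The three modes can be contrasted on a single toy implication (again my own illustrative example, "rain implies wet grass"), with each mode written as a function that takes what that mode starts from and returns what it derives:

```python
rule = ("rain", "wet_grass")  # the implication: if rain, then wet_grass

def deduce(rule, fact):
    """Deduction: from the rule and its premise, derive the implied fact."""
    premise, conclusion = rule
    return conclusion if fact == premise else None

def induce(cause, effect):
    """Induction: from paired observations, derive a rule that implies them."""
    return (cause, effect)

def abduce(rule, observation):
    """Abduction: from the rule and an observed consequence, derive an
    unobserved fact that would explain the observation."""
    premise, conclusion = rule
    return premise if observation == conclusion else None

print(deduce(rule, "rain"))         # wet_grass (new fact, implied by the rule)
print(induce("rain", "wet_grass"))  # ('rain', 'wet_grass') (new rule)
print(abduce(rule, "wet_grass"))    # rain (unobserved fact explaining the data)
```

The asymmetry is visible in the signatures: deduction runs the implication forwards, abduction runs it backwards, and induction manufactures the implication itself.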

It's easier to understand all this with examples.

One example of deductive reasoning is planning, or automated planning and scheduling, a field of classical AI research. Planning is the "model-based approach to autonomous behaviour", according to the textbook on planning by Geffner and Bonet. An autonomous agent starts with a "model" that describes the environment in which the agent is to operate as a set of entities with discrete states, and a set of actions that the agent can take to change those states. The agent is given a goal, an instance of its model, and it must find a sequence of actions, which we call a "plan", to take the entities in the model from their current states to the states described by the goal.
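A bare-bones sketch of that setup (a hypothetical toy domain of my own, not Geffner and Bonet's notation): states are tuples, actions are named state-to-state transitions, and the plan is found by breadth-first search over the model.

```python
from collections import deque

# Model: two entities (object location, hand status) and two actions,
# each mapping a precondition state to a successor state.
actions = {
    "pick_up":  {("on_table", "hand_empty"): ("held", "hand_full")},
    "put_down": {("held", "hand_full"): ("on_table", "hand_empty")},
}

def plan(initial, goal, actions):
    """Breadth-first search for a shortest sequence of actions (a plan)
    taking the model from the initial state to the goal state."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, transitions in actions.items():
            nxt = transitions.get(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None  # no plan reaches the goal

print(plan(("on_table", "hand_empty"), ("held", "hand_full"), actions))
# ['pick_up']
```

This is deduction in the sense above: the plan is a consequence derived from the model (the theory) by a fixed search procedure, not something observed or learned.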
