
How generative AI thinks and applies analogical reasoning through learned data


How generative AI thinks:

The ability to reason and draw inferences from natural language is a crucial component of intelligence. According to recent research findings, large language models (LLMs) struggle with complex reasoning tasks (Six brave Apple researchers demonstrate that LLMs don’t reason), particularly multi-hop reasoning: synthesizing information from multiple sources to arrive at an answer, and generating explanations for those answers, a critical aspect of explainability. On the other hand, LLMs do exhibit solid Analogical Reasoning capabilities.

What is Analogical Reasoning?

Analogical reasoning is a cognitive process in which individuals apply knowledge from familiar situations to address new challenges. The essence of analogical reasoning lies in recognizing similar patterns, i.e., analogies, between two contexts, even if the specific details differ. By transferring the underlying knowledge, analogical reasoning helps make inferences to solve problems or answer questions.

For example, knowing that:

The heart pumps blood through the body.

One might draw an analogy to understand that:

A water pump moves water through pipes.

Here, the analogy is between the function of the heart in the human body and the function of a pump in a water system. Even though the two are different things, the underlying relationship, i.e., pumping liquid through a system of tubes, is similar.
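As a minimal sketch, assuming nothing beyond plain Python (every name below is illustrative, not from any library), analogical transfer can be modeled as aligning the roles entities play in the familiar source domain with the entities of the new target domain:

```python
# Minimal sketch of analogical transfer: the relational pattern from a
# familiar "source" domain (the heart) is re-applied to a new "target"
# domain (a water pump) by matching the role each entity plays.

source = {
    "mover": "heart",
    "substance": "blood",
    "conduits": "vessels",
    # The relation is stated over roles, not over specific entities.
    "relation": "the {mover} pumps {substance} through {conduits}",
}

target = {
    "mover": "water pump",
    "substance": "water",
    "conduits": "pipes",
}

def transfer(source: dict, target: dict) -> str:
    """Fill the source domain's relational pattern with the target's entities."""
    return source["relation"].format(**target)

print(transfer(source, target))
# -> "the water pump pumps water through pipes"
```

Because the relation is expressed over roles rather than specific entities, the same pattern applies to both systems even though every detail differs.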

Overall, Analogical Reasoning is a powerful tool for abstract thinking, helping bridge the gap between the known and the unknown.

How does Analogical Reasoning occur in LLMs?

Through the identification and application of patterns learned from vast amounts of data. LLMs are trained on diverse sources, allowing them to recognize similarities and generate analogies based on pattern matching.

It performs a sort of very elaborate and sophisticated Search, Copy, Paste.

When a user presents a problem or query, the model analyzes the input to identify similarities with previously encountered scenarios. The LLM can then draw parallels, extracting relevant information from related contexts to formulate responses.
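As a toy illustration of that search step (hand-made three-dimensional vectors stand in for the high-dimensional embeddings a real LLM learns; none of this is an actual model API):

```python
import math

# Toy "search" step: find the stored pattern whose embedding is most
# similar to the query's embedding. The 3-D vectors are hand-made
# stand-ins for a real model's learned representations.

memory = {
    "the heart pumps blood through vessels": [0.9, 0.1, 0.2],
    "a teacher guides a student":            [0.1, 0.9, 0.1],
    "rain falls from clouds":                [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec):
    """Return the stored pattern that best matches the query embedding."""
    return max(memory, key=lambda text: cosine(memory[text], query_vec))

# Pretend embedding of "a pump moves water through pipes": it lands
# closest to the heart/blood pattern, which is then reused as the
# template ("copy, paste") for the answer.
query = [0.8, 0.2, 0.1]
print(most_similar(query))  # -> "the heart pumps blood through vessels"
```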

For example, if asked about the relationship between a teacher and a student, an LLM might analogize it to a coach and an athlete, recognizing the guiding roles in both relationships. This process not only enhances comprehension but also enriches the quality of responses, making the LLM more effective in providing insights and solutions.
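One classic way to make this concrete is the vector-offset view of analogy from word-embedding models such as word2vec: the offset teacher − student points in roughly the same direction as coach − athlete. The two-dimensional vectors below are hand-crafted to mimic that geometry and are purely illustrative:

```python
import math

# Hand-crafted 2-D embeddings: the first coordinate loosely encodes the
# domain (education vs. sport), the second the role (guide vs. learner).
emb = {
    "teacher": [0.9, 0.8],
    "student": [0.9, 0.2],
    "coach":   [0.2, 0.8],
    "athlete": [0.2, 0.2],
    "doctor":  [0.7, 0.5],  # distractor word
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the offset emb[b] - emb[a] + emb[c]."""
    target = [eb - ea + ec for ea, eb, ec in zip(emb[a], emb[b], emb[c])]
    candidates = (w for w in emb if w not in {a, b, c})
    return max(candidates, key=lambda w: cosine(emb[w], target))

print(analogy("teacher", "student", "coach"))  # -> "athlete"
```

The guiding-role relation survives the change of domain, which is exactly the kind of pattern the model transfers.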

Ultimately, while Analogical Reasoning enables LLMs to generate creative and contextually appropriate answers that mimic human-like reasoning patterns, the extent of their reasoning abilities is currently limited.

What are the Limitations of Analogical Reasoning in LLMs?

LLMs face several limitations in analogical reasoning. They often rely on surface-level analogies that overlook the nuances of relationships and depend heavily on clear, specific input. When presented with ambiguous prompts, they may produce generic or irrelevant responses. Additionally, their inability to learn from experience confines their reasoning abilities to the patterns established during training, which can result in analogies reflecting biases in the training data and perpetuating inaccuracies for underrepresented concepts. LLMs also struggle with multi-step (multi-hop) reasoning and complex problems, further limiting their effectiveness in tackling intricate tasks.

Because LLMs primarily rely on training data for reasoning, their ability to handle untrained or unexpected scenarios is limited. While they can effectively generate responses within established parameters after encountering a problem and its corresponding answer multiple times, they often struggle with uncommon or highly specific questions. This limitation is particularly pronounced in situations that are not well-represented in their training data, especially those characterized by high variability.

Another significant challenge arises when LLMs are prompted to answer queries that depend on implicit information rather than explicit details. In these cases, establishing connections between concepts relies on formal reasoning and knowledge that might not be present in the training data but may be available externally.

Tony, June, Clarence

Tony and his sister, June, went to a sneak preview of the Avengers movie and loved it. Clarence has 3 children and one grandson. The grandson’s name is Tony.

What is the relationship between Clarence and June?

LLM answer: Clarence is June’s father.

Here’s the reasoning:

• Tony is Clarence’s grandson.

• Tony and June are siblings (as implied by the sentence structure).

• Since Clarence is Tony’s grandfather, Clarence must be June’s father as well.

Thus, Clarence is June’s father.

Correct answer: June is Clarence’s granddaughter (and Clarence is June’s grandfather).

Here is the correct reasoning:

• Tony and June are siblings.

• Clarence’s grandson’s name is Tony.

• Since Clarence is Tony’s grandfather, Clarence must be June’s grandfather, and June is his granddaughter.
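The correct answer takes two explicit inference hops. A minimal rule-based sketch (hypothetical code, not any particular reasoning library) shows how making each hop an explicit, checkable step avoids the shortcut the LLM took:

```python
# Facts extracted from the puzzle, as (subject, relation, object) triples.
facts = {
    ("Tony", "sibling_of", "June"),
    ("Clarence", "grandfather_of", "Tony"),
}

def infer(facts):
    """One rule: someone's grandfather is also the grandfather of that
    someone's siblings. Apply it to derive new facts."""
    derived = set(facts)
    for (a, rel, b) in facts:
        if rel == "grandfather_of":
            for (c, rel2, d) in facts:
                if rel2 == "sibling_of" and c == b:
                    derived.add((a, "grandfather_of", d))
    return derived

for fact in sorted(infer(facts)):
    print(fact)
# ('Clarence', 'grandfather_of', 'June') is derived: June is Clarence's
# granddaughter -- precisely the hop the LLM's answer skipped.
```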

Ultimately, this inability to adapt to novel contexts presents a considerable obstacle in leveraging LLMs for complex problem-solving and mission-critical applications, where accuracy and reliability are paramount. Let us explore another example:

Blacksmith

During medieval times, the administrative system was organized such that jobs were traditionally held within the same family. The eldest son of the village’s blacksmith would take up his father’s business and become the next blacksmith. The other sons would join the army or serve the king in some fashion while the daughters did what their mother did. Although the world has undergone innumerable changes, the dynastic system has not undergone any change whatsoever. Children who have fathers who played major league baseball are 800 times more likely than other kids to become major league players themselves.

Which of the following best refutes the author’s reasoning?

A. In countries with a royal family, the eldest son of the king is destined to be the next king.

B. The blacksmith’s eldest son in a medieval village could not choose any other profession even if he wanted to.

C. A major baseball player will have better knowledge and skills and will be able to guide his son better.

D. 60% of the country’s doctors have at least one parent who is a doctor.

E. It is a proven fact that success in any career is determined more by skill and practice rather than by genetic makeup.

Correct answer: B (the medieval blacksmith’s son had no choice of profession, whereas today’s children do; the modern correlation arises without compulsion, so the dynastic system has in fact changed).

LLM answer: D

[Image: How Generative AI thinks]

Source: Sinuhe.ai
