Skip to main content
Mythos

All memos tagged #artificial-intelligence-lexicon

Prompt Injection refers to a security vulnerability in Large Language Model (LLM)s where malicious or adversarial input is crafted to manipulate the model’s behavior. This can involve overriding...

11/4/2025

The Hierarchical Reasoning Model (HRM) is a recurrent neural architecture proposed to address limitations in reasoning exhibited by large language models. Unlike Chain of Thought Reasoning, which...

9/29/2025

Hierarchal reasoning is a brain-inspired approach to Artificial Intelligence (AI) in which Large Language Model (LLM)s process information across multiple levels of abstraction and timescales. This...

9/29/2025

Chain of thought reasoning is a method in Artificial Intelligence (AI) and cognitive science that involves articulating intermediate steps between a problem and its solution. Chain of thought...

9/29/2025