Autonomous agents that operate and learn in complex environments must continually balance multiple objectives, including efficiency and risk. Trial-and-error reinforcement learning can reduce the efficiency and safety of agents, since exploration must be incorporated alongside the exploitation of learned knowledge. Internal simulation frameworks, such as Winfield’s consequence engine, have demonstrated the value of internal simulations that allow agents to predict the outcomes of actions without physical commitment. However, that framework lacks a formal mechanism for storing knowledge learned from these internal simulations, so agents must repeatedly re-simulate scenarios they have already encountered. This paper introduces Popperian Expectations, a novel architecture that extends Winfield’s framework by enabling agents to create, update, and utilise causal expectations learned from internal simulations. Using the Expectation Event Calculus (EEC), the agent can form interpretable causal expectations grounded in these simulations. This paper explores how Popperian Expectations enable agents to reason reflectively and adapt to complex environments.