Modern artificial intelligence (AI) systems critically lack the ability to reflect on their own behavior, reasoning processes, and generated output. Despite recent advances in large language models (LLMs), for example, the appearance of reflection in such systems is a linguistic trick rather than a cognitive competence. This deficiency poses significant challenges, particularly in contexts where safety and reliability are paramount, such as healthcare and security applications, as well as in social settings where complex context shapes expected behavior. Building on previous work in computational self-awareness and reflection in adaptive systems, in this paper we propose a reflective agent architecture that integrates formal models of social expectations and self-simulation mechanisms with LLMs. This architecture enables LLM-based systems to reflect on their decisions and outputs, evaluating their alignment with expected behaviors. Furthermore, it enables agents to internally simulate potential actions and evaluate their consequences. Using the expectation event calculus (EEC), the system formally represents expectations, events, and derived outcomes, supporting systematic self-evaluation. Concurrently, self-simulation allows the agent to introspectively predict and analyze possible outcomes and to refine its decisions when necessary. Our results demonstrate enhanced alignment with human expectations, highlighting the architecture’s promise of greater social sensitivity in complex scenarios that require robust and trustworthy AI interactions.
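To make the EEC component concrete, the following is a minimal sketch of how an expectation and its fulfilment or violation might be expressed on top of standard event-calculus predicates ($\mathit{Happens}$, $\mathit{Initiates}$, $\mathit{HoldsAt}$). The $\mathit{exp}$, $\mathit{fulf}$, and $\mathit{viol}$ predicates and the request/respond example are illustrative assumptions, not necessarily the paper's exact formalization.

\[
\begin{aligned}
&\mathit{Happens}(e, t) \wedge \mathit{Initiates}(e, f, t) \rightarrow \mathit{HoldsAt}(f, t{+}1) \\
&\mathit{Happens}(\mathit{request}(u, a), t) \rightarrow \mathit{exp}(\mathit{respond}(a, u), t, t{+}d) \\
&\mathit{exp}(\alpha, t_1, t_2) \wedge \mathit{Happens}(\alpha, t) \wedge t_1 \le t \le t_2 \rightarrow \mathit{fulf}(\alpha, t) \\
&\mathit{exp}(\alpha, t_1, t_2) \wedge \neg\,\exists t \,\bigl(\mathit{Happens}(\alpha, t) \wedge t_1 \le t \le t_2\bigr) \rightarrow \mathit{viol}(\alpha, t_2)
\end{aligned}
\]

The first axiom is the standard event-calculus rule that an event initiating a fluent makes it hold; the remaining rules illustrate how an expectation raised by a user request is either fulfilled by a timely response or marked as violated once its deadline $t_2$ passes, giving the agent a formal signal against which to evaluate its own behavior.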