TACL 2025 · Open Source · Moral Foundations Theory · Llama 3.2

Moral Pragmatics in Language Models

Fine-tuned LLMs that perform six-step pragmatic moral reasoning grounded in Moral Foundations Theory to judge whether a conversational reply is morally acceptable, problematic, or neutral.

🚀 Try the Live Demo · 📦 HuggingFace Model

Project Overview

This work addresses the problem of moral judgment in conversational AI: given a question/prompt and a reply, can a language model determine whether the reply is morally acceptable, problematic, or neutral?

Unlike surface-level toxicity detection, moral judgment requires pragmatic understanding: identifying the implicit actions in a reply, predicting their consequences, and evaluating those consequences against deep moral principles rather than just flagging offensive words. We ground this reasoning in Moral Foundations Theory (MFT).

📚

Dataset

Moral Integrity Corpus (MIC): 23,500 training examples of Q&A pairs annotated with a judgment, MFT labels, and Rules-of-Thumb.

🤖

Model

Llama 3.2-3B base model, adapted with supervised fine-tuning (SFT). The fusion setting combines MFT and Judgment inference chains.

🎯

Task

Judgment classification: agree (morally acceptable), disagree (morally problematic), neutral.

🔗

Related Work

Part of the MoralMachine project family. See also the Moral Awareness docs.

Judgment Labels

✅

Agree

The reply is morally acceptable: its actions and consequences align with the moral foundations.

❌

Disagree

The reply is morally problematic: its actions violate or down-regulate moral foundations.

➖

Neutral

The reply is morally neutral: its actions have no clear positive or negative moral valence.

Moral Foundations Theory

The models are grounded in Moral Foundations Theory (MFT), which identifies six universal moral intuitions that underpin human ethical judgments. These are provided as a prefix to every prompt, anchoring the model's reasoning to principled moral concepts rather than surface-level cues.

🌱 Care

Wanting someone or something to be safe, healthy, and happy.

⚖️ Fairness

Wanting to see individuals or groups treated equally or equitably.

🗽 Liberty

Wanting people to be free to make their own decisions.

🤝 Loyalty

Wanting unity and seeing people keep promises to an in-group.

👑 Authority

Wanting to respect social roles, duties, privacy, peace, and order.

✨ Sanctity

Wanting people and things to be clean, pure, innocent, and holy.
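
For reference, these definitions can also be kept in code and joined into the shared prompt prefix used in the Prompt Formats section below. A minimal sketch, assuming nothing beyond the definitions above (the dictionary name MFT_FOUNDATIONS is illustrative and not part of the released models):

# The six foundations and their one-line definitions, exactly as they appear in the MFT prefix.
MFT_FOUNDATIONS = {
    "Care": "wanting someone or something to be safe, healthy, and happy",
    "Fairness": "wanting to see individuals or groups treated equally or equitably",
    "Liberty": "wanting people to be free to make their own decisions",
    "Loyalty": "wanting unity and seeing people keep promises or obligations to an in-group",
    "Authority": "wanting to respect social roles, duties, privacy, peace, and order",
    "Sanctity": "wanting people and things to be clean, pure, innocent, and holy",
}

# Join the definitions into the shared prompt prefix (see "Prompt Formats" below).
MFT_PREFIX = "There are the six moral foundations. " + " ".join(
    f"{name}: {definition}." for name, definition in MFT_FOUNDATIONS.items()
)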

Experimental Settings

Five training settings are evaluated, representing different levels of moral reasoning depth. The fusion setting is our best-performing model; it combines LLM-generated reasoning chains that jointly consider MFT and Judgment signals.

Setting | Description | Inference Chain | MFT Prefix
baseline0 | No reasoning, no MFT context. Direct judgment from prompt + reply only. | None | No
baseline+ | MFT foundation names listed in prompt, but no reasoning chain. | None | Foundation names only
ours ★ | Full MFT prefix + LLM-generated Judgment inference chain. Explicit step-by-step reasoning over moral foundations. | inference4Judgment | Yes (full definitions)
fusion ★★ | Best model. Full MFT prefix + fused inference chain that jointly reasons about MFT and Judgment. Generated by combining both MFT and Judgment annotation signals. | inference4Fusion | Yes (full definitions)
COT | Chain-of-Thought: the model reasons freely over the reply without explicit MFT grounding. | inference4COT | No

Six-Step Moral Reasoning Chain

The core contribution of this work is the six-step pragmatic reasoning chain that the model generates before producing its final judgment. Each step progressively narrows from observable actions to deep moral evaluation:

1. Actions: What actions does the reply describe or imply?
2. Consequences: What are the potential consequences of each action?
3. Moral Foundations: Which MFT foundations do those actions engage?
4. Regulation: Do the actions up-regulate or down-regulate those foundations?
5. Sentiment: What is the reply's sentiment toward those consequences?
6. Judgment: Final moral verdict (agree / disagree / neutral).

This chain mirrors how a human moral reasoner would approach the task: first identifying what is happening, then why it matters morally, and finally reaching a principled verdict. The optional Rule-of-Thumb (RoT) field anchors steps 3–6 to a specific moral principle.
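
Because the generated chain numbers its steps (1) through (6) and ends with "###Judgment:", it can be post-processed mechanically. A minimal parsing sketch, assuming the output follows the format of the training examples below (the function name and regex are illustrative, not part of the released code):

import re

def parse_reasoning(generated_text: str) -> dict:
    """Split a generated completion into its six numbered steps and the final judgment."""
    # Everything between ###Inference: and ###Judgment: is the reasoning chain.
    inference = generated_text.split("###Inference:")[-1].split("###Judgment:")[0]
    # Steps are numbered (1) ... (6); capture the text that follows each marker.
    steps = re.findall(r"\((\d)\)\s*(.*?)(?=\(\d\)|$)", inference, flags=re.S)
    judgment = generated_text.split("###Judgment:")[-1].strip().rstrip(".")
    return {"steps": {int(n): text.strip() for n, text in steps}, "judgment": judgment}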

Prompt Formats

MFT Prefix (shared across all pragmatic prompts)

There are the six moral foundations. Care: wanting someone or something to be safe, healthy, and happy. Fairness: wanting to see individuals or groups treated equally or equitably. Liberty: wanting people to be free to make their own decisions. Loyalty: wanting unity and seeing people keep promises or obligations to an in-group. Authority: wanting to respect social roles, duties, privacy, peace, and order. Sanctity: wanting people and things to be clean, pure, innocent, and holy.

Training Text Format โ€” Fusion Setting

Training data is formatted as a single text sequence. The model learns to complete the reasoning chain and produce the final judgment token.

Disagree example (morally problematic reply):

[MFT_PREFIX] There is a conversation "Prompt: Is it okay to ghost someone you've been dating for a month? Reply: Absolutely, just stop replying — they'll get the hint." There is a Rule-of-Thumb (RoT): "It is wrong to ignore people without explanation.". ###Inference: (1) The Actions mentioned in the Reply are ceasing all communication without explanation. (2) The potential consequence of each Action mentioned in the Reply is: the other person is left confused, hurt, and without closure. (3) The underlying moral foundations of those Actions are care, authority because the action fails to protect the other person's emotional well-being (care) and disregards social norms of respectful communication (authority). (4) The Actions down-regulate the care foundation by prioritising convenience over the other person's emotional health, and down-regulate authority by violating social norms. (5) The sentiment of the Reply to those consequences is positive and dismissive, framing ghosting as simple and acceptable. (6) The Reply endorses an action with negative moral consequences, making a moral judgment of disagree. ###Judgment: disagree.

Agree example (morally acceptable reply):

[MFT_PREFIX] There is a conversation "Prompt: How do I tell a guy, in a nice way that I don't have any interest in dating him? Reply: You could tell him that you aren't looking for a relationship right now." There is a Rule-of-Thumb (RoT): "It is important to be honest with people about your feelings.". ###Inference: (1) The Actions mentioned in the Reply are politely declining romantic interest by using a soft, honest explanation. (2) The potential consequence is: the other person receives a clear but kind answer, avoiding false hope. (3) The underlying moral foundations are care, liberty, loyalty — showing care for his feelings, respecting his liberty to move on, and being loyal to honest communication. (4) The Actions up-regulate all three foundations by being honest yet considerate. (5) The sentiment of the Reply is positive and constructive. (6) The Reply promotes honesty and emotional care, making a moral judgment of agree. ###Judgment: agree.
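
Both examples follow the same fixed template, so training sequences can be assembled mechanically from the MIC annotations. A minimal formatting sketch under that assumption (the function name and parameter names are illustrative placeholders for the corresponding MIC fields; mft_prefix is the shared prefix shown above):

def build_training_text(mft_prefix: str, question: str, reply: str,
                        inference: str, judgment: str, rot: str = "") -> str:
    """Format one MIC example as a single fusion-setting training sequence."""
    text = f'{mft_prefix} There is a conversation "Prompt: {question} Reply: {reply}" '
    if rot:  # The Rule-of-Thumb is optional.
        text += f'There is a Rule-of-Thumb (RoT): "{rot}". '
    # The model learns to complete the reasoning chain and the final judgment token.
    return text + f"###Inference: {inference} ###Judgment: {judgment}."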

Inference Prompt (test-time)

At test time the model receives only the prefix; the reasoning chain and judgment are generated autoregressively. The Rule-of-Thumb is optional; omitting it still produces a valid chain.

Setting | Input prompt sent to model
fusion / ours | [MFT_PREFIX] There is a conversation "Prompt: …; Reply: …" [There is a Rule-of-Thumb (RoT): "…".] ###Inference:
baseline+ | There is a conversation "Prompt: …; Reply: …" Let us focus on the moral foundations of "{mft_list}". ###Judgment:
baseline0 | There is a conversation "Prompt: …; Reply: …" ###Judgment:
COT | There is a conversation "Prompt: …; Reply: …" Let us focus on the moral foundations of "{mft_list}". ###Inference:
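
A minimal prompt-builder covering the four test-time formats above; a sketch only, assuming the setting names in the table as keys (the function name is illustrative, and mft_prefix / mft_list stand in for the shared prefix and the per-example foundation list):

def build_inference_prompt(setting: str, question: str, reply: str,
                           mft_prefix: str = "", rot: str = "", mft_list: str = "") -> str:
    """Construct the test-time input prompt for a given experimental setting."""
    conversation = f'There is a conversation "Prompt: {question}; Reply: {reply}" '
    if setting in ("fusion", "ours"):
        rot_part = f'There is a Rule-of-Thumb (RoT): "{rot}". ' if rot else ""
        return f"{mft_prefix} {conversation}{rot_part}###Inference: "
    if setting == "baseline+":
        return f'{conversation}Let us focus on the moral foundations of "{mft_list}". ###Judgment: '
    if setting == "baseline0":
        return f"{conversation}###Judgment: "
    if setting == "COT":
        return f'{conversation}Let us focus on the moral foundations of "{mft_list}". ###Inference: '
    raise ValueError(f"Unknown setting: {setting}")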

Python Usage

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

# Shared MFT prefix, identical to the one used during training (see "Prompt Formats").
MFT_PREFIX = (
    "There are the six moral foundations. "
    "Care: wanting someone or something to be safe, healthy, and happy. "
    "Fairness: wanting to see individuals or groups treated equally or equitably. "
    "Liberty: wanting people to be free to make their own decisions. "
    "Loyalty: wanting unity and seeing people keep promises or obligations to an in-group. "
    "Authority: wanting to respect social roles, duties, privacy, peace, and order. "
    "Sanctity: wanting people and things to be clean, pure, innocent, and holy."
)

tokenizer = AutoTokenizer.from_pretrained("MoralMachine/moral-judgment-fusion-llama3.2-3B")
model = AutoModelForCausalLM.from_pretrained(
    "MoralMachine/moral-judgment-fusion-llama3.2-3B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Conversation to judge; the Rule-of-Thumb is optional (see "Inference Prompt" above).
question = "How do I tell a guy, in a nice way that I don't have any interest in dating him?"
reply = "You could tell him that you aren't looking for a relationship right now."
rot = "It is important to be honest with people about your feelings."

prompt = (
    f"{MFT_PREFIX} There is a conversation "
    f'"Prompt: {question} Reply: {reply}" '
    f'There is a Rule-of-Thumb (RoT): "{rot}". ###Inference: '
)

# Greedy decoding; the model completes the six-step chain and the final judgment.
output = pipe(prompt, max_new_tokens=512, do_sample=False)[0]["generated_text"]
judgment = output.split("###Judgment:")[-1].strip().rstrip(".")
# judgment ∈ {"agree", "disagree", "neutral"}
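
The pipeline also accepts a list of prompts, so several conversations can be judged in one call. A short usage sketch reusing MFT_PREFIX and pipe from above (the example pairs are adapted from the training examples and are purely illustrative):

# Judge a small batch of conversations; the Rule-of-Thumb is omitted here since it is optional.
pairs = [
    ("Is it okay to ghost someone you've been dating for a month?",
     "Absolutely, just stop replying."),
    ("How do I tell a guy, in a nice way that I don't have any interest in dating him?",
     "You could tell him that you aren't looking for a relationship right now."),
]
prompts = [
    f'{MFT_PREFIX} There is a conversation "Prompt: {q} Reply: {r}" ###Inference: '
    for q, r in pairs
]
for result in pipe(prompts, max_new_tokens=512, do_sample=False):
    text = result[0]["generated_text"]
    print(text.split("###Judgment:")[-1].strip().rstrip("."))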

Available Models

⚖️

Judgment · Fusion · Llama 3.2-3B

Best-performing model. Fusion inference chains, 23,500 training examples, MFT-grounded 6-step reasoning.

View on HuggingFace →
🧭

Moral Awareness · Llama 3.2-3B

Sister model: diagnoses moral violations and rewrites replies. From the MoralMachine project.

View on HuggingFace →
🛡️

Toxicity · Llama 3.2-3B

MFT-grounded toxicity correction on RealToxicityPrompts.

View on HuggingFace →
📖

Moral Awareness Docs

Full documentation for the companion rewriting/diagnosis models.

View Docs →

🚀 Try It Live

Enter any conversational prompt and reply; the model generates a full six-step moral reasoning chain and outputs a judgment.

Open Interactive Demo

Citation

@article{moral-pragmatics-tacl-2025,
  title   = {Moral Pragmatics in Language Models},
  journal = {Transactions of the Association for Computational Linguistics (TACL)},
  year    = {2025}
}