kernel_entropy

Kernel Language Entropy (KLE) for measuring semantic uncertainty in LLM generations.

This package implements the KLE algorithm from arXiv:2405.20003.

class kernel_entropy.HydraGenerator(gamma: int = 4, beta: float = 1.0, max_new_tokens: int = 200)[source]

Bases: object

PoE-backed batched generation for KLE.

generate_batch produces one response per seed by calling PoE.generate_with_cache in pure-generation mode (is_mcq=False).

generate_batch(prompt: str, seeds: list[int], temperature: float = 0.98, verbose: bool = False) → list[str][source]

Generate one response per seed.

Seeds the torch RNG before each PoE call so the draft-head pick and multinomial draws inside generate_with_cache are reproducible. Forks the RNG so per-seed seeding does not leak into caller state.
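The seed-then-fork pattern described above can be sketched as follows. `sample_fn` is a hypothetical stand-in for `PoE.generate_with_cache`; the only assumption is that it draws from the global torch RNG:

```python
import torch

def generate_batch_sketch(sample_fn, seeds):
    """Per-seed reproducible sampling, sketched with torch.random.fork_rng.

    sample_fn is a hypothetical stand-in for PoE.generate_with_cache:
    any callable whose randomness comes from the global torch RNG.
    """
    outputs = []
    for seed in seeds:
        # fork_rng restores the caller's RNG state on exit, so seeding
        # here does not leak into the caller.
        with torch.random.fork_rng():
            torch.manual_seed(seed)
            outputs.append(sample_fn())
    return outputs

# Hypothetical sampler: a few multinomial draws from a fixed distribution.
probs = torch.tensor([0.1, 0.2, 0.3, 0.4])
draw = lambda: torch.multinomial(probs, num_samples=5, replacement=True).tolist()

a = generate_batch_sketch(draw, seeds=[0, 1, 0])
```

Because the RNG is re-seeded before every call, repeating a seed repeats the draws exactly (`a[0] == a[2]` above), which is what makes each generated response reproducible per seed.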

class kernel_entropy.ModernBERTScorer(sentences: list[str], model_id: str = 'tasksource/ModernBERT-large-nli', model: AutoModelType | None = None, tokenizer: TokenizersBackend | None = None)[source]

Bases: object

Pairwise NLI scoring using ModernBERT-large-nli.

Computes similarity matrix W for Kernel Language Entropy.

compute(verbose: bool = False) → Tensor | tuple[Tensor, dict[tuple[int, int], dict[str, dict[str, float]]]][source]

Compute pairwise similarity matrix W.

For each pair (i, j) where i < j, computes:

W[i,j] = W[j,i] = weighted(NLI(i->j)) + weighted(NLI(j->i))

Parameters:

verbose – If True, returns (W, raw_probabilities) tuple

Returns:

N × N symmetric similarity matrix W with W[i,j] in [0, 2] and a zero diagonal. If verbose=True, returns a (W, raw_probabilities) tuple.
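The W construction above can be sketched as follows. The directional weighting (entailment plus half the neutral mass, so each direction lies in [0, 1]) is an assumption for illustration, not necessarily the package's exact scheme:

```python
import torch

def similarity_matrix(nli_probs):
    """Build a symmetric W from directional NLI probabilities.

    nli_probs[i][j] is a hypothetical dict of class probabilities for
    premise i, hypothesis j. Weighting each direction as
    entailment + 0.5 * neutral is an assumed scheme, so each direction
    is in [0, 1] and the bidirectional sum is in [0, 2].
    """
    n = len(nli_probs)
    W = torch.zeros(n, n)
    for i in range(n):
        for j in range(i + 1, n):
            fwd = nli_probs[i][j]["entailment"] + 0.5 * nli_probs[i][j]["neutral"]
            bwd = nli_probs[j][i]["entailment"] + 0.5 * nli_probs[j][i]["neutral"]
            W[i, j] = W[j, i] = fwd + bwd  # symmetric; diagonal stays 0
    return W

# Two sentences with made-up NLI probabilities in both directions.
probs = {
    0: {1: {"entailment": 0.9, "neutral": 0.1}},
    1: {0: {"entailment": 0.8, "neutral": 0.2}},
}
W = similarity_matrix(probs)
```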

compute_against_baseline(baseline_idx: int = 0) → Tensor[source]

Compute KLE similarity between sentences[baseline_idx] and every other sentence.

Runs exactly 2*(N-1) NLI inferences in a single batched forward pass: only the pairs involving the baseline, not the full pairwise matrix.

Returns:

1-D tensor of length N where result[j] is the bidirectional KLE score between sentences[baseline_idx] and sentences[j], and result[baseline_idx] = 0.
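The 2*(N-1) pair enumeration can be sketched as below; this is an illustration of the batching logic, not the package's internals:

```python
def baseline_pairs(sentences, baseline_idx=0):
    """Enumerate the directional (premise, hypothesis) inputs needed when
    only similarity against one baseline sentence is required.

    Returns 2*(N-1) pairs: baseline->j and j->baseline for every j
    other than the baseline itself.
    """
    base = sentences[baseline_idx]
    pairs = []
    for j, s in enumerate(sentences):
        if j == baseline_idx:
            continue
        pairs.append((base, s))  # baseline -> j
        pairs.append((s, base))  # j -> baseline
    return pairs
```

Compared with the full matrix's N*(N-1) directional inferences, this keeps the cost linear in N when only one row of similarities is needed.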

get_nli_probabilities(nli_inputs: list[tuple[str, str]]) → Tensor[source]

Get raw NLI probabilities for given (premise, hypothesis) pairs.

kernel_entropy.compute_kle(prompt: str, n_generations: int = 10, temperature: float = 0.98, lengthscale_t: float = 1.0, verbose: bool = False) → float[source]

Compute Kernel Language Entropy for a prompt.

Pipeline:
  1. Generate N responses via PoE (pure generation, no uncertainty head)

  2. Compute pairwise NLI similarity matrix W

  3. Calculate Von Neumann Entropy from W

Parameters:
  • prompt – Input prompt for generation

  • n_generations – Number of responses to generate (default: 10)

  • temperature – Generation temperature (default: 0.98)

  • lengthscale_t – Heat kernel lengthscale (default: 1.0)

  • verbose – Print each response after generation (default: False)

Returns:

Von Neumann Entropy (float); higher values indicate more semantic uncertainty.
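The three pipeline steps can be wired together as in this sketch. `generate_fn`, `score_fn`, and `entropy_fn` are hypothetical stand-ins for `HydraGenerator.generate_batch`, `ModernBERTScorer.compute`, and `kle_from_similarity`; the real wiring may differ:

```python
def compute_kle_sketch(prompt, generate_fn, score_fn, entropy_fn,
                       n_generations=10, temperature=0.98, lengthscale_t=1.0):
    """Sketch of the three-step KLE pipeline, with the generator, NLI
    scorer, and entropy computation injected as callables (hypothetical
    stand-ins for the package's components)."""
    # 1. Sample N responses, one per seed, at the given temperature.
    responses = generate_fn(prompt, seeds=list(range(n_generations)),
                            temperature=temperature)
    # 2. Pairwise NLI similarity matrix W (N x N, symmetric).
    W = score_fn(responses)
    # 3. Von Neumann Entropy of the kernel built from W.
    return entropy_fn(W, t=lengthscale_t)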

kernel_entropy.kle_from_similarity(W: Tensor, t: float = 1.0) → float[source]

Compute Kernel Language Entropy from similarity matrix.

Parameters:
  • W – N×N symmetric similarity matrix on CUDA (from NLI scoring)

  • t – Heat kernel lengthscale (default: 1.0)

Returns:

Von Neumann Entropy (float)
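The heat-kernel Von Neumann Entropy described in the KLE paper can be sketched as follows. This assumes the standard recipe (unnormalized graph Laplacian L = D - W, heat kernel K = expm(-t L), unit-trace normalization); the package's exact Laplacian variant may differ:

```python
import torch

def vne_from_similarity(W, t=1.0):
    """Von Neumann Entropy of the heat kernel of W's graph Laplacian.

    A sketch assuming L = D - W (unnormalized Laplacian), K = expm(-t L),
    rho = K / tr(K), and VNE = -sum(lambda * log(lambda)) over rho's
    eigenvalues; the package may use a different normalization.
    """
    L = torch.diag(W.sum(dim=1)) - W      # unnormalized graph Laplacian
    K = torch.matrix_exp(-t * L)          # heat kernel
    rho = K / torch.trace(K)              # unit-trace density matrix
    eigvals = torch.linalg.eigvalsh(rho)  # real, since rho is symmetric
    eigvals = eigvals.clamp_min(1e-12)    # guard against log(0)
    return float(-(eigvals * eigvals.log()).sum())
```

Sanity check: when W is all zeros (N mutually unrelated responses), rho = I/N and the entropy is log N, the maximum; a strongly connected W (all responses entail each other) drives the entropy toward 0.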

Modules

entropy

Kernel Language Entropy calculation.

generation

PoE text generation for Kernel Language Entropy.

nli

ModernBERT NLI scoring for Kernel Language Entropy.

pipeline

KLE Pipeline - end-to-end Kernel Language Entropy computation.