@arizeai/phoenix-otel re-exports the OpenTelemetry trace, context, and SpanStatusCode APIs, plus the @arizeai/openinference-core attribute builders, OITracer, and utility helpers. For most use cases, prefer the tracing helpers covered in Tracing Helpers. Use manual spans when you need exact span timing, low-level OpenTelemetry interop, or explicit control over which attributes are recorded.

Relevant Source Files

  • src/index.ts re-exports the manual tracing surface
  • src/register.ts configures the Phoenix exporter and provider
  • node_modules/@arizeai/openinference-core/src/helpers/attributeHelpers.ts implements the attribute builders
  • node_modules/@arizeai/openinference-core/src/trace/trace-config/OITracer.ts implements redaction-aware tracing
  • node_modules/@arizeai/openinference-core/src/utils/index.ts implements the safety utilities
Helper Wrappers First

If you do not need low-level OpenTelemetry control, reach for the helper wrappers first:
import {
  context,
  register,
  setMetadata,
  setSession,
  withSpan,
} from "@arizeai/phoenix-otel";

register({ projectName: "support-bot" });

const retrieveDocs = withSpan(
  async (query: string) => {
    const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
    return response.json();
  },
  { name: "retrieve-docs", kind: "RETRIEVER" }
);

const generateAnswer = withSpan(
  async (query: string, docs: unknown[]) => {
    return `Answer based on ${docs.length} documents`;
  },
  { name: "generate-answer", kind: "LLM" }
);

const ragPipeline = withSpan(
  async (query: string) => {
    const docs = await retrieveDocs(query);
    return generateAnswer(query, docs);
  },
  { name: "rag-pipeline", kind: "CHAIN" }
);

await context.with(
  setMetadata(
    setSession(context.active(), { sessionId: "session-abc-123" }),
    { environment: "production" }
  ),
  () => ragPipeline("What is Phoenix?")
);

Raw OpenTelemetry Spans

Use raw spans when you need full control over timing and attributes:
import {
  SpanStatusCode,
  register,
  trace,
} from "@arizeai/phoenix-otel";

register({ projectName: "support-bot" });

const tracer = trace.getTracer("support-bot");

await tracer.startActiveSpan("lookup-customer", async (span) => {
  try {
    span.setAttribute("customer.id", "cust_123");
    await Promise.resolve();
  } catch (error) {
    span.recordException(error as Error);
    span.setStatus({ code: SpanStatusCode.ERROR });
    throw error;
  } finally {
    span.end();
  }
});
If you need context-propagated session, user, metadata, or prompt-template attributes on a plain tracer span, copy them explicitly with getAttributesFromContext() as shown on Context Attributes.

Attribute Helper APIs

Use the attribute helpers to build OpenInference-compatible attribute sets for LLM, retriever, embedding, tool, input, and output spans.
import { getLLMAttributes, trace } from "@arizeai/phoenix-otel";

const tracer = trace.getTracer("llm-service");

tracer.startActiveSpan("llm-inference", (span) => {
  span.setAttributes(
    getLLMAttributes({
      provider: "openai",
      modelName: "gpt-4o-mini",
      inputMessages: [{ role: "user", content: "What is Phoenix?" }],
      outputMessages: [{ role: "assistant", content: "Phoenix is..." }],
      tokenCount: { prompt: 12, completion: 44, total: 56 },
      invocationParameters: { temperature: 0.2 },
    })
  );
  span.end();
});
Common helpers:
  • getLLMAttributes
  • getEmbeddingAttributes
  • getRetrieverAttributes
  • getToolAttributes
  • getMetadataAttributes
  • getInputAttributes
  • getOutputAttributes
  • defaultProcessInput
  • defaultProcessOutput
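Each helper returns a flat OpenTelemetry Attributes object keyed by OpenInference semantic conventions, so several helpers' results can be spread together onto one span. As a rough illustration of the shape getRetrieverAttributes produces (key names taken from the OpenInference spec; the flattening below is a simplified sketch, not the library's implementation):

```typescript
// Approximate sketch of how retriever documents flatten into
// dot-delimited OpenInference attribute keys (simplified; not the
// library's actual implementation).
type RetrievalDocument = { id?: string; content?: string; score?: number };

function retrieverAttributes(
  documents: RetrievalDocument[]
): Record<string, string | number> {
  const attrs: Record<string, string | number> = {};
  documents.forEach((doc, i) => {
    const prefix = `retrieval.documents.${i}.document`;
    if (doc.id !== undefined) attrs[`${prefix}.id`] = doc.id;
    if (doc.content !== undefined) attrs[`${prefix}.content`] = doc.content;
    if (doc.score !== undefined) attrs[`${prefix}.score`] = doc.score;
  });
  return attrs;
}

const attrs = retrieverAttributes([
  { id: "doc-1", content: "Phoenix is an observability platform.", score: 0.92 },
]);
// attrs["retrieval.documents.0.document.score"] === 0.92
```

Because the result is a plain flat object, span.setAttributes accepts it directly, which is why multiple helper results can be merged onto a single span.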

Trace Config And Redaction

OITracer wraps an OpenTelemetry tracer and applies OpenInference trace masking rules before attributes are written. It also merges propagated context attributes automatically.
import {
  OITracer,
  OpenInferenceSpanKind,
  trace,
  withSpan,
} from "@arizeai/phoenix-otel";

const tracer = new OITracer({
  tracer: trace.getTracer("my-service"),
  traceConfig: {
    hideInputs: true,
    hideOutputText: true,
    hideEmbeddingVectors: true,
    base64ImageMaxLength: 8_000,
  },
});

const safeLLMCall = withSpan(
  async (prompt: string) => `model response for ${prompt}`,
  {
    tracer,
    kind: OpenInferenceSpanKind.LLM,
    name: "safe-llm-call",
  }
);
You can also configure masking via environment variables:
  • OPENINFERENCE_HIDE_INPUTS
  • OPENINFERENCE_HIDE_OUTPUTS
  • OPENINFERENCE_HIDE_INPUT_MESSAGES
  • OPENINFERENCE_HIDE_OUTPUT_MESSAGES
  • OPENINFERENCE_HIDE_INPUT_IMAGES
  • OPENINFERENCE_HIDE_INPUT_TEXT
  • OPENINFERENCE_HIDE_OUTPUT_TEXT
  • OPENINFERENCE_HIDE_EMBEDDING_VECTORS
  • OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH
  • OPENINFERENCE_HIDE_PROMPTS
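As a sketch, masking similar to the traceConfig example above could be applied per deployment instead of in code (variable names from the list above; the numeric value is illustrative):

```shell
# Mask inputs, output text, and embedding vectors at the process level;
# OITracer reads these when no explicit traceConfig value overrides them.
export OPENINFERENCE_HIDE_INPUTS=true
export OPENINFERENCE_HIDE_OUTPUT_TEXT=true
export OPENINFERENCE_HIDE_EMBEDDING_VECTORS=true
export OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH=8000
```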

Utility Helpers

The package also re-exports small safety utilities:
  • withSafety({ fn, onError? }) returns a wrapped version of fn that returns null (and invokes onError, if given) when fn throws
  • safelyJSONStringify(value) wraps JSON.stringify
  • safelyJSONParse(value) wraps JSON.parse
import {
  safelyJSONParse,
  safelyJSONStringify,
  withSafety,
} from "@arizeai/phoenix-otel";

const safeDivide = withSafety({
  fn: (a: number, b: number) => a / b,
});

const serialized = safelyJSONStringify({ ok: true });
const parsed = safelyJSONParse(serialized ?? "{}");
const result = safeDivide(4, 2);
