
How to customize attributes of traces

Oftentimes, you will want to customize various attributes of the traces you log to LangSmith.

Logging to a specific project

As mentioned in the Concepts section, LangSmith uses the concept of a Project to group traces. If left unspecified, the tracer project is set to default. You can set the LANGCHAIN_PROJECT environment variable to configure a custom project name for an entire application run. This should be done before executing your program.

export LANGCHAIN_PROJECT="My Project"
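Equivalently, you can set the variable from within Python, as long as you do so before any traced code runs. A minimal sketch using the standard os module:

import os

# Must be set before the first traced call executes
os.environ["LANGCHAIN_PROJECT"] = "My Project"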

Changing the destination project at runtime

When global environment variables are too broad, you can also set the project name at program runtime. This is useful when you want to log traces to different projects within the same application.

# You can set the project name for a specific tracer instance:
from langchain.callbacks.tracers import LangChainTracer

tracer = LangChainTracer(project_name="My Project")
chain.invoke({"query": "How many people live in canada as of 2023?"}, config={"callbacks": [tracer]})


# LangChain Python also supports a context manager for tracing a specific block of code.
# You can set the project name using the project_name parameter.
from langchain_core.tracers.context import tracing_v2_enabled

with tracing_v2_enabled(project_name="My Project"):
    chain.invoke({"query": "How many people live in Canada as of 2023?"})

Adding metadata and tags to traces

LangSmith supports sending arbitrary metadata and tags along with traces. This is useful for associating additional information with a trace, such as the environment in which it was executed, or the user who initiated it. For more information on metadata and tags, see the Concepts page. For information on how to query traces and runs by metadata and tags, see the Querying Traces page.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI."),
    ("user", "{input}"),
])
chat_model = ChatOpenAI()
output_parser = StrOutputParser()

# Tags and metadata can be configured with RunnableConfig
chain = (prompt | chat_model | output_parser).with_config(
    {"tags": ["top-level-tag"], "metadata": {"top-level-key": "top-level-value"}}
)

# Tags and metadata can also be passed at runtime
chain.invoke({"input": "What is the meaning of life?"}, {"tags": ["shared-tags"], "metadata": {"shared-key": "shared-value"}})

Customizing the run name

When you create a run, you can specify a name for it. The name identifies the run in LangSmith, can be used to filter and group runs, and appears as the run's title in the LangSmith UI.

# When tracing within LangChain, run names default to the class name of the traced object (e.g., 'ChatOpenAI').
# (Note: this is not currently supported directly on LLM objects.)
...
configured_chain = chain.with_config({"run_name": "MyCustomChain"})
configured_chain.invoke({"query": "What is the meaning of life?"})
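The run name can also be supplied per invocation through the config rather than baked in with with_config. A short sketch, assuming the same chain as above:

# run_name is a standard RunnableConfig key, so it can be passed at call time
chain.invoke(
    {"query": "What is the meaning of life?"},
    config={"run_name": "MyCustomChain"},
)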

For more examples of this with LangChain, check out the recipe on customizing run names.

Updating a run

The following fields can be updated when patching a run with the SDK or API; a sketch using the Python SDK follows the list.

  • end_time: datetime.datetime
  • error: str | None
  • outputs: dict | None
  • events: list[dict] | None
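For example, you can patch an existing run with the Python SDK's update_run method. A minimal sketch; the run ID below is a placeholder for the ID of a real run you have already created:

import datetime
from langsmith import Client

client = Client()

# Placeholder ID: substitute the ID of an existing run
client.update_run(
    run_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    outputs={"answer": "Updated output"},
    end_time=datetime.datetime.now(datetime.timezone.utc),
)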

Masking inputs and outputs

In some situations, you may need to hide the inputs and outputs of your traces for privacy or security reasons. LangSmith provides a way to filter the inputs and outputs of your traces before they are sent to the LangSmith backend, so our servers never see the original values.

If you want to completely hide the inputs and outputs of your traces, you can set the following environment variables when running your application:

LANGCHAIN_HIDE_INPUTS=true
LANGCHAIN_HIDE_OUTPUTS=true

This works for both the LangSmith SDK and LangChain.

You can also customize and override this behavior for a given Client instance by setting the hide_inputs and hide_outputs parameters on the Client object. These parameters accept either a boolean or a function that receives the inputs/outputs dictionary and returns its redacted replacement.

from langchain_core.tracers.context import tracing_v2_enabled
from langchain_openai import ChatOpenAI
from langsmith import Client


def filter_inputs(inputs: dict) -> dict:
    # You can define custom filtering here
    return {}


def filter_outputs(outputs: dict) -> dict:
    # You can define custom filtering here
    return {}


llm = ChatOpenAI()

# You can configure tracing using the context manager below
# or by directly creating a LangChainTracer object
with tracing_v2_enabled(
    "test-filtering",
    client=Client(hide_inputs=filter_inputs, hide_outputs=filter_outputs),
) as cb:
    llm.invoke("Say foo")
    # The linked run will still have its metadata, but its inputs and outputs will be hidden
    print(cb.get_run_url())

with tracing_v2_enabled("test-filtering", client=Client()) as cb:
llm.invoke("Say bar")
# The linked run will not have hidden inputs and outputs
print(cb.get_run_url())
