
Tracing Quick Start

You can get started with LangSmith tracing using LangChain, the Python SDK, the TypeScript SDK, or the API. The following sections provide a quick start guide for each of these options.

1. Install or upgrade LangChain

pip install -U langchain_openai langchain_core

2. Create an API key

Next, create an API key by logging in and navigating to the settings page.

3. Configure your environment

export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>

# The examples below use the OpenAI API, so you will also need an OpenAI API key
export OPENAI_API_KEY=<your-openai-api-key>
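
If you prefer to configure these from within your application rather than the shell, you can set the same variables in Python before any LangChain objects are created. This is a minimal sketch; the key values are placeholders.

import os

# Equivalent to the shell exports above; set these before constructing your chain.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"          # placeholder
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"      # placeholder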

4. Log a trace

No extra code is needed to log a trace to LangSmith. Just run your LangChain code as you normally would.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant. Please respond to the user's request only based on the given context."),
("user", "Question: {question}\nContext: {context}")
])
model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

question = "Can you summarize this morning's meetings?"
context = "During this morning's meeting, we solved all world conflict."
chain.invoke({"question": question, "context": context})
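
If you are not using LangChain, the Python SDK mentioned in the introduction can trace plain functions instead. The sketch below assumes the langsmith and openai packages are installed and uses the @traceable decorator; the function name and prompt wording are illustrative.

from langsmith import traceable
from openai import OpenAI

client = OpenAI()

@traceable  # logs each call to this function as a run in LangSmith
def answer(question: str, context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Respond to the user's request only based on the given context."},
            {"role": "user", "content": f"Question: {question}\nContext: {context}"},
        ],
    )
    return response.choices[0].message.content

answer("Can you summarize this morning's meetings?",
       "During this morning's meeting, we solved all world conflict.")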

5. View the trace

By default, the trace will be logged to the project named default. You can change the project you log to by following the instructions here. An example trace logged using the above code has been made public and can be viewed here.
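
For example, one common way to send traces to a different project is to set the LANGCHAIN_PROJECT environment variable before running your code. A minimal sketch in Python, with a placeholder project name:

import os

# Send traces to a project other than "default"; the name here is a placeholder.
os.environ["LANGCHAIN_PROJECT"] = "my-first-project"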

