Tracing Quick Start
You can get started with LangSmith tracing using LangChain, the Python SDK, the TypeScript SDK, or the API. The following sections provide a quick start guide for each of these options.
- LangChain
- Python SDK
- TypeScript SDK
- API
LangChain

1. Install or upgrade LangChain
```shell
# Python (pip)
pip install langchain_openai langchain_core

# TypeScript (pick one package manager)
yarn add @langchain/openai @langchain/core
npm install @langchain/openai @langchain/core
pnpm add @langchain/openai @langchain/core
```
2. Create an API key
Next, create an API key by logging in and navigating to the settings page.
3. Configure your environment
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>

# The examples below use the OpenAI API, so you will also need an OpenAI API key
export OPENAI_API_KEY=<your-openai-api-key>
```
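If you would rather configure these values inside your program than in the shell, the same settings can be applied as environment variables in Python. A minimal equivalent of the exports above (fill in your own keys):

```python
import os

# Set these before running any LangChain code so the tracer picks them up.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
```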
4. Log a trace
No extra code is needed to log a trace to LangSmith. Just run your LangChain code as you normally would.
Python:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Please respond to the user's request only based on the given context."),
    ("user", "Question: {question}\nContext: {context}")
])
model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

question = "Can you summarize this morning's meetings?"
context = "During this morning's meeting, we solved all world conflict."
chain.invoke({"question": question, "context": context})
```
TypeScript:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant. Please respond to the user's request only based on the given context."],
  ["user", "Question: {question}\nContext: {context}"],
]);
const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });
const outputParser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

const question = "Can you summarize this morning's meetings?";
const context = "During this morning's meeting, we solved all world conflict.";
await chain.invoke({ question: question, context: context });
```
5. View the trace
By default, the trace will be logged to the project named default. You can change the project you log to by following the instructions here. An example of a trace logged using the above code has been made public and can be viewed here.
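One common way to pick a different project is the LANGCHAIN_PROJECT environment variable. A minimal sketch, assuming your setup respects that variable (the project name below is just an illustration):

```python
import os

# Traces logged while this is set go to "my-first-project" instead of
# "default"; the project is created automatically if it does not exist yet.
os.environ["LANGCHAIN_PROJECT"] = "my-first-project"
```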
Python SDK

1. Install the LangSmith library
Start by installing the Python library.
```shell
pip install langsmith
```
2. Create an API key
Next, create an API key by logging in and navigating to the settings page.
3. Configure your environment
```shell
export LANGCHAIN_API_KEY=<your-api-key>

# The examples below use the OpenAI API, so you will also need an OpenAI API key
export OPENAI_API_KEY=<your-openai-api-key>
```
4. Log a trace
We provide multiple ways to log traces to LangSmith. Below, we'll highlight how to use our most explicit RunTree API. See more in the Integrations section.
```python
# To run the example below, ensure the environment variable OPENAI_API_KEY is set
import openai
from langsmith.run_trees import RunTree

# This can be a user input to your app
question = "Can you summarize this morning's meetings?"

# Create a top-level run
pipeline = RunTree(
    name="Chat Pipeline",
    run_type="chain",
    inputs={"question": question}
)

# This can be retrieved in a retrieval step
context = "During this morning's meeting, we solved all world conflict."

messages = [
    {"role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context."},
    {"role": "user", "content": f"Question: {question}\nContext: {context}"}
]

# Create a child run
child_llm_run = pipeline.create_child(
    name="OpenAI Call",
    run_type="llm",
    inputs={"messages": messages},
)

# Generate a completion
client = openai.Client()
chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=messages
)

# End the runs and log them
child_llm_run.end(outputs=chat_completion)
child_llm_run.post()

pipeline.end(outputs={"answer": chat_completion.choices[0].message.content})
pipeline.post()
```
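If you don't need the explicit control RunTree gives you, the SDK also provides a traceable decorator that logs one run per call to the wrapped function, with nested decorated calls appearing as child runs. A minimal sketch of the same pipeline; note that, as we understand it, the decorator only sends traces when LANGCHAIN_TRACING_V2=true is set in addition to LANGCHAIN_API_KEY:

```python
import openai
from langsmith import traceable

client = openai.Client()

# Each call to this function is logged as a single "chain" run, with the
# question as the run's input and the return value as its output.
@traceable(run_type="chain", name="Chat Pipeline")
def chat_pipeline(question: str) -> str:
    context = "During this morning's meeting, we solved all world conflict."
    messages = [
        {"role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context."},
        {"role": "user", "content": f"Question: {question}\nContext: {context}"},
    ]
    chat_completion = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    )
    return chat_completion.choices[0].message.content

chat_pipeline("Can you summarize this morning's meetings?")
```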
5. View the trace
By default, the trace will be logged to the project named default. You can change the project you log to by following the instructions here. An example of a trace logged using the above code has been made public and can be viewed here.
TypeScript SDK

1. Install the LangSmith library
Start by installing the TypeScript library.
```shell
npm install langsmith
# or
yarn add langsmith
```
2. Create an API key
Next, create an API key by navigating to the settings page.
3. Configure your environment
```shell
export LANGCHAIN_API_KEY=<your-api-key>

# The examples below use the OpenAI API, so you will also need an OpenAI API key
export OPENAI_API_KEY=<your-openai-api-key>
```
4. Log a trace
We provide multiple ways to log traces to LangSmith. Below, we'll highlight how to use our most explicit RunTree API. See more in the Integrations section.
```typescript
// To run the example below, ensure the environment variable OPENAI_API_KEY is set
import OpenAI from "openai";
import { RunTree } from "langsmith";

// This can be a user input to your app
const question = "Can you summarize this morning's meetings?";

const pipeline = new RunTree({
  name: "Chat Pipeline",
  run_type: "chain",
  inputs: { question }
});

// This can be retrieved in a retrieval step
const context = "During this morning's meeting, we solved all world conflict.";

const messages = [
  { role: "system", content: "You are a helpful assistant. Please respond to the user's request only based on the given context." },
  { role: "user", content: `Question: ${question}\nContext: ${context}` }
];

// Create a child run
const childRun = await pipeline.createChild({
  name: "OpenAI Call",
  run_type: "llm",
  inputs: { messages },
});

// Generate a completion
const client = new OpenAI();
const chatCompletion = await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: messages,
});

// End the runs and log them
childRun.end(chatCompletion);
await childRun.postRun();

pipeline.end({ outputs: { answer: chatCompletion.choices[0].message.content } });
await pipeline.postRun();
```
5. View the trace
By default, the trace will be logged to the project named default. You can change the project you log to by following the instructions here. An example of a trace logged using the above code has been made public and can be viewed here.
API

1. Create an API key

Create an API key by navigating to the settings page.
2. Log a trace
Log a trace using the LangSmith API.
Here, we'll show you how to use the requests library in Python to log a trace, but you can use any HTTP client in any language.
```python
# To run the example below, ensure the environment variable OPENAI_API_KEY is set
import openai
import requests
from datetime import datetime
from uuid import uuid4

def post_run(run_id, name, run_type, inputs, parent_id=None):
    """Function to post a new run to the API."""
    data = {
        "id": run_id.hex,
        "name": name,
        "run_type": run_type,
        "inputs": inputs,
        "start_time": datetime.utcnow().isoformat(),
    }
    if parent_id:
        data["parent_run_id"] = parent_id.hex
    requests.post(
        "https://api.smith.langchain.com/runs",
        json=data,
        headers=headers
    )

def patch_run(run_id, outputs):
    """Function to patch a run with outputs."""
    requests.patch(
        f"https://api.smith.langchain.com/runs/{run_id}",
        json={
            "outputs": outputs,
            "end_time": datetime.utcnow().isoformat(),
        },
        headers=headers,
    )

# Send your API key in the request headers
headers = {"x-api-key": "<YOUR API KEY>"}

# This can be a user input to your app
question = "Can you summarize this morning's meetings?"

# This can be retrieved in a retrieval step
context = "During this morning's meeting, we solved all world conflict."

messages = [
    {"role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context."},
    {"role": "user", "content": f"Question: {question}\nContext: {context}"}
]

# Create parent run
parent_run_id = uuid4()
post_run(parent_run_id, "Chat Pipeline", "chain", {"question": question})

# Create child run
child_run_id = uuid4()
post_run(child_run_id, "OpenAI Call", "llm", {"messages": messages}, parent_run_id)

# Generate a completion
client = openai.Client()
chat_completion = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)

# End runs
patch_run(child_run_id, chat_completion.dict())
patch_run(parent_run_id, {"answer": chat_completion.choices[0].message.content})
```
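The same two calls generalize to any intermediate step you want to capture. As a hypothetical extension of the snippet above (reusing its post_run and patch_run helpers), the context lookup could be logged as its own child run, here with the retriever run type:

```python
# Hypothetical extra step: log the context lookup as a second child run of
# the parent pipeline, alongside the LLM call.
retrieval_run_id = uuid4()
post_run(retrieval_run_id, "Fetch Context", "retriever", {"question": question}, parent_run_id)
patch_run(retrieval_run_id, {"documents": [context]})
```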
3. View the trace
By default, the trace will be logged to the project named default. You can change the project you log to by following the instructions here. An example of a trace logged using the above code has been made public and can be viewed here.