Get your LLM app from prototype to production

LangSmith is an all-in-one developer platform for every step of the LLM-powered application lifecycle, whether you’re building with LangChain or not.
Debug, collaborate, test, and monitor your LLM applications.

The platform for your LLM development lifecycle

LLM apps are powerful, but they have peculiar characteristics: non-determinism, coupled with unpredictable natural-language inputs, makes for countless ways the system can fall short. Traditional engineering best practices need to be re-imagined for working with LLMs, and LangSmith supports all phases of the development lifecycle.

Sign Up

Develop with greater visibility

Unexpected results happen all the time with LLMs. With full visibility into the entire sequence of calls, you can spot the source of errors and performance bottlenecks in real time with surgical precision. Debug. Experiment. Observe. Repeat. Until you’re happy with your results.

Go to Docs

Collaborate with teammates to get app behavior just right

Building LLM-powered applications requires a close partnership between developers and subject matter experts.

Traces

Easily share a chain trace with colleagues, clients, or end users, bringing explainability to anyone with the shared link.

Hub

Use LangSmith Hub to craft, version, and comment on prompts. No engineering experience required.

Annotation Queues

Try out LangSmith Annotation Queues to add human labels and feedback on traces.

Datasets

Easily collect examples, and construct datasets from production data or existing sources. Datasets can be used for evaluations, few-shot prompting, and even fine-tuning.

Get tips for evaluating the performance of your LLM app, from design to production.

Monitor cost, latency, and quality.

See what’s happening with your production application, so you can take action when needed or rest assured while your chains and agents do the hard work.

Go to Docs
User feedback collection
Advanced filtering
Online auto-evaluation
Cost tracking
Inspect anomalies and errors
Spot latency spikes

LangSmith FAQs

Can I use LangSmith if I don’t use LangChain or LangGraph?

Yes! Many companies that don’t build with LangChain/LangGraph use LangSmith. You can log traces to LangSmith via the Python SDK, the TypeScript SDK, or the API. See the docs for more information.
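As a rough illustration of the API route, the sketch below assembles a minimal run record using only the Python standard library. The endpoint URL and the exact payload field names are assumptions based on LangSmith’s “create run” REST endpoint and should be checked against the current API reference before use.

```python
import json
import uuid
from datetime import datetime, timezone

# Assumed endpoint for creating a run; verify against the LangSmith API docs.
API_URL = "https://api.smith.langchain.com/runs"

def build_run_payload(name, inputs, outputs):
    """Assemble a minimal run record for a single traced LLM call."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "id": str(uuid.uuid4()),   # unique run identifier
        "name": name,              # display name of the traced step
        "run_type": "llm",         # e.g. "llm", "chain", or "tool"
        "inputs": inputs,
        "outputs": outputs,
        "start_time": now,
        "end_time": now,
    }

payload = build_run_payload(
    "my-completion",
    {"prompt": "Hello, world"},
    {"completion": "Hi there!"},
)
# POST json.dumps(payload) to API_URL with an x-api-key header to log the run.
body = json.dumps(payload)
```

In practice the SDKs wrap this for you; the point is that any language that can issue an HTTP POST can log traces.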

How easy is it to start using LangSmith if I use LangChain or LangGraph?

Getting started on LangSmith requires just two environment variables in your LangChain or LangGraph code. See how to send traces from your LangGraph agent or your LangChain app.

My application isn’t written in Python or TypeScript. Will LangSmith be helpful?

Yes, you can log traces to LangSmith using a standard OpenTelemetry client to access all LangSmith features, including tracing, running evals, and prompt engineering. See the docs.
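A hedged sketch of the OpenTelemetry route: any standard OTel client that honors the OTLP environment variables can be pointed at LangSmith. The endpoint path and header name below are assumptions drawn from LangSmith’s OTel integration docs and should be verified there.

```python
import os

# Assumed OTLP settings for exporting OpenTelemetry spans to LangSmith;
# confirm the endpoint path and header name in the current docs.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.smith.langchain.com/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-api-key=<your-api-key>"

# Any standard OTel SDK (Python, Java, Go, ...) that reads these variables
# will then export its spans to LangSmith without app-specific code.
```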

How can LangSmith help with observability and evaluation?

LangSmith traces contain the full information of all the inputs and outputs of each step of the application, giving users full visibility into their agent or LLM app behavior. LangSmith also allows users to instantly run evals to assess agent or LLM app performance — including LLM-as-Judge evaluators for auto-scoring and the ability to attach human feedback. Learn more.

I can’t have data leave my environment. Can I self-host LangSmith?

Yes, we allow customers to self-host LangSmith on our enterprise plan. We deliver the software to run on your Kubernetes cluster, and data will not leave your environment. For more information, check out our documentation.

Where is LangSmith data stored?

For Cloud SaaS, traces are stored in GCP us-central1 or GCP europe-west4, depending on your plan. Learn more.

Will LangSmith add latency to my application?

No, LangSmith does not add any latency to your application. In the LangSmith SDK, there’s a callback handler that sends traces to a LangSmith trace collector which runs as an async, distributed process. Additionally, if LangSmith experiences an incident, your application performance will not be disrupted.

Will you train on the data that I send LangSmith?

We will not train on your data, and you own all rights to your data. See LangSmith Terms of Service for more information.

How much does LangSmith cost?

See our pricing page for more information, and find a plan that works for you.

Ready to start shipping reliable GenAI apps faster?

Get started with LangChain, LangSmith, and LangGraph to enhance your LLM app development, from prototype to production.