Databricks Langchain Integrations Python API
- Setup:

Install databricks-langchain:

```bash
pip install -U databricks-langchain
```

If you are running outside Databricks, set the Databricks workspace hostname and personal access token as environment variables:

```bash
export DATABRICKS_HOSTNAME="https://your-databricks-workspace"
export DATABRICKS_TOKEN="your-personal-access-token"
```
Re-exported Unity Catalog Utilities

This module re-exports selected utilities from the Unity Catalog open source package.

Available aliases: UCFunctionToolkit, UnityCatalogTool, DatabricksFunctionClient.

Refer to the Unity Catalog documentation for more information.
- class databricks_langchain.ChatDatabricks
Bases: BaseChatModel
Databricks chat model integration.
Instantiate:

```python
from databricks_langchain import ChatDatabricks

llm = ChatDatabricks(
    model="databricks-meta-llama-3-1-405b-instruct",
    temperature=0,
    max_tokens=500,
)
```
- For Responses API endpoints, such as a ResponsesAgent, set use_responses_api=True:
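A minimal instantiation sketch (the endpoint name below is a placeholder for your own Responses API serving endpoint):

```python
from databricks_langchain import ChatDatabricks

# "my-responses-agent-endpoint" is a hypothetical endpoint name; substitute
# the name of a serving endpoint that implements the Responses API.
llm = ChatDatabricks(
    model="my-responses-agent-endpoint",
    use_responses_api=True,
)
```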
Invoke:
```python
messages = [
    ("system", "You are a helpful translator. Translate the user sentence to French."),
    ("human", "I love programming."),
]
llm.invoke(messages)
```

```python
AIMessage(
    content="J'adore la programmation.",
    response_metadata={"prompt_tokens": 32, "completion_tokens": 9, "total_tokens": 41},
    id="run-64eebbdd-88a8-4a25-b508-21e9a5f146c5-0",
)
```
Stream:
```python
for chunk in llm.stream(messages):
    print(chunk)
```

```python
content='J' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content="'" id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content='ad' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content='ore' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content=' la' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content=' programm' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content='ation' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content='.' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content='' response_metadata={'finish_reason': 'stop'} id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
```

```python
stream = llm.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
full
```

```python
AIMessageChunk(
    content="J'adore la programmation.",
    response_metadata={"finish_reason": "stop"},
    id="run-4cef851f-6223-424f-ad26-4a54e5852aa5",
)
```
To get token usage returned when streaming, pass the stream_usage kwarg:

```python
stream = llm.stream(messages, stream_usage=True)
next(stream).usage_metadata
```

```python
{"input_tokens": 28, "output_tokens": 5, "total_tokens": 33}
```
Alternatively, setting stream_usage when instantiating the model can be useful when incorporating ChatDatabricks into LCEL chains or when using methods like .with_structured_output, which generate chains under the hood.

```python
llm = ChatDatabricks(model="databricks-meta-llama-3-1-405b-instruct", stream_usage=True)
structured_llm = llm.with_structured_output(...)
```
Async:
```python
await llm.ainvoke(messages)

# stream:
# async for chunk in llm.astream(messages)

# batch:
# await llm.abatch([messages])
```

```python
AIMessage(
    content="J'adore la programmation.",
    response_metadata={"prompt_tokens": 32, "completion_tokens": 9, "total_tokens": 41},
    id="run-e4bb043e-772b-4e1d-9f98-77ccc00c0271-0",
)
```
Tool calling:
```python
from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
ai_msg.tool_calls
```

```python
[
    {
        "name": "GetWeather",
        "args": {"location": "Los Angeles, CA"},
        "id": "call_ea0a6004-8e64-4ae8-a192-a40e295bfa24",
        "type": "tool_call",
    }
]
```
To use tool calls, your model endpoint must support the tools parameter. See [Function calling on Databricks](https://python.langchain.com/docs/integrations/chat/databricks/#function-calling-on-databricks) for more information.

- param extra_params: Dict[str, Any] | None = None
Any extra parameters to pass to the endpoint.
- param stream_usage: bool = False
Whether to include usage metadata in streaming output. If True, additional message chunks will be generated during the stream including usage metadata.
- param model: str [Required] (alias 'endpoint')
Name of Databricks Model Serving endpoint to query.
- param temperature: float | None = None
Sampling temperature. Higher values make the model more creative.
- param use_responses_api: bool = False
Whether to use the Responses API to format inputs and outputs.
- bind_tools(tools: Sequence[Dict[str, Any] | Type[BaseModel] | Callable | BaseTool], *, tool_choice: dict | str | Literal['auto', 'none', 'required', 'any'] | bool | None = None, **kwargs: Any) → Runnable[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]], BaseMessage]
Bind tool-like objects to this chat model.
Assumes model is compatible with OpenAI tool-calling API.
- Parameters:
tools – A list of tool definitions to bind to this chat model. Can be a dictionary, pydantic model, callable, or BaseTool. Pydantic models, callables, and BaseTools will be automatically converted to their schema dictionary representation.
tool_choice – Which tool to require the model to call (see the sketch below). Options are:
  - name of the tool (str): calls the corresponding tool.
  - "auto": automatically selects a tool (including no tool).
  - "none": the model does not generate any tool calls and instead must generate a standard assistant message.
  - "required": the model picks the most relevantant tool in tools and must generate a tool call.
  - a dictionary of the form {"type": "function", "function": {"name": "<<tool_name>>"}}: calls the named tool.
**kwargs – Any additional parameters to pass to the Runnable constructor.
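For instance, reusing the GetWeather and GetPopulation schemas from the tool-calling example above, the tool_choice options might be exercised like this (a sketch, not taken from the source):

```python
# Force at least one tool call on every turn.
llm_required = llm.bind_tools([GetWeather, GetPopulation], tool_choice="required")

# Pin the model to a single tool by name.
llm_weather_only = llm.bind_tools([GetWeather], tool_choice="GetWeather")
```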
- with_structured_output(schema: Dict | Type | None = None, *, method: Literal['function_calling', 'json_mode', 'json_schema'] = 'function_calling', include_raw: bool = False, **kwargs: Any) → Runnable[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]], Dict | BaseModel]
Model wrapper that returns outputs formatted to match the given schema.
Assumes model is compatible with OpenAI tool-calling API.
- Parameters:
schema – The output schema as a dict or a Pydantic class. If a Pydantic class then the model output will be an object of that class. If a dict then the model output will be a dict. With a Pydantic class the returned attributes will be validated, whereas with a dict they will not be. If method is "function_calling" and schema is a dict, then the dict must match the OpenAI function-calling spec or be a valid JSON schema with top level "title" and "description" keys specified.
method – The method for steering model generation, either "function_calling" or "json_mode". If "function_calling" then the schema will be converted to an OpenAI function and the returned model will make use of the function-calling API. If "json_mode" then OpenAI's JSON mode will be used. Note that if using "json_mode" then you must include instructions for formatting the output into the desired schema into the model call.
include_raw – If False then only the parsed structured output is returned. If an error occurs during model output parsing it will be raised. If True then both the raw model response (a BaseMessage) and the parsed model response will be returned. If an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys "raw", "parsed", and "parsing_error".
- Returns:
If include_raw is False and schema is a Pydantic class, Runnable outputs an instance of schema (i.e., a Pydantic object).
Otherwise, if include_raw is False, then Runnable outputs a dict.
If include_raw is True, then Runnable outputs a dict with keys:
  - "raw": BaseMessage
  - "parsed": None if there was a parsing error, otherwise the type depends on the schema as described above.
  - "parsing_error": Optional[BaseException]
- Return type:
A Runnable that takes any ChatModel input and returns output as described above.
Examples:
Function-calling, Pydantic schema (method="function_calling", include_raw=False):

```python
from databricks_langchain import ChatDatabricks
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


llm = ChatDatabricks(model="databricks-meta-llama-3-1-70b-instruct")
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")

# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
```

Function-calling, Pydantic schema (method="function_calling", include_raw=True):

```python
from databricks_langchain import ChatDatabricks
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


llm = ChatDatabricks(model="databricks-meta-llama-3-1-70b-instruct")
structured_llm = llm.with_structured_output(AnswerWithJustification, include_raw=True)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")

# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }
```

Function-calling, dict schema (method="function_calling", include_raw=False):

```python
from databricks_langchain import ChatDatabricks
from langchain_core.utils.function_calling import convert_to_openai_tool
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


dict_schema = convert_to_openai_tool(AnswerWithJustification)
llm = ChatDatabricks(model="databricks-meta-llama-3-1-70b-instruct")
structured_llm = llm.with_structured_output(dict_schema)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")

# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
```

JSON mode, Pydantic schema (method="json_mode", include_raw=True):

```python
from databricks_langchain import ChatDatabricks
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    answer: str
    justification: str


llm = ChatDatabricks(model="databricks-meta-llama-3-1-70b-instruct")
structured_llm = llm.with_structured_output(
    AnswerWithJustification, method="json_mode", include_raw=True
)

structured_llm.invoke(
    "Answer the following question. "
    "Make sure to return a JSON blob with keys 'answer' and 'justification'."
    "What's heavier a pound of bricks or a pound of feathers?"
)

# -> {
#     'raw': AIMessage(content='{ "answer": "They are both the same weight.", "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight." }'),
#     'parsed': AnswerWithJustification(answer='They are both the same weight.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight.'),
#     'parsing_error': None
# }
```

JSON mode, no schema (schema=None, method="json_mode", include_raw=True):

```python
structured_llm = llm.with_structured_output(method="json_mode", include_raw=True)

structured_llm.invoke(
    "Answer the following question. "
    "Make sure to return a JSON blob with keys 'answer' and 'justification'."
    "What's heavier a pound of bricks or a pound of feathers?"
)

# -> {
#     'raw': AIMessage(content='{ "answer": "They are both the same weight.", "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight." }'),
#     'parsed': {
#         'answer': 'They are both the same weight.',
#         'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight.'
#     },
#     'parsing_error': None
# }
```
- class databricks_langchain.DatabricksEmbeddings

Bases: Embeddings, BaseModel
Databricks embedding model integration.
Instantiate:
```python
from databricks_langchain import DatabricksEmbeddings

embed = DatabricksEmbeddings(
    endpoint="databricks-bge-large-en",
)
```
Embed single text:
```python
input_text = "The meaning of life is 42"
embed.embed_query(input_text)
```

```python
[0.01605224609375, -0.0298309326171875, ...]
```
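Embedding several texts at once goes through the standard LangChain Embeddings interface; a brief sketch:

```python
input_texts = ["The meaning of life is 42", "To be or not to be"]
vectors = embed.embed_documents(input_texts)
len(vectors)  # one embedding vector per input text
```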
- class databricks_langchain.DatabricksVectorSearch(index_name: str, endpoint: str | None = None, embedding: Embeddings | None = None, text_column: str | None = None, doc_uri: str | None = None, primary_key: str | None = None, columns: List[str] | None = None, workspace_client: WorkspaceClient | None = None, client_args: Dict[str, Any] | None = None, include_score: bool = False)
Bases: VectorStore
Databricks vector store integration.
- Parameters:
index_name – The name of the index to use. Format: "catalog.schema.index".
endpoint – The name of the Databricks Vector Search endpoint. If not specified, the endpoint name is automatically inferred based on the index name.

  Note: If you are using databricks-vectorsearch version < 0.35, the endpoint parameter is required when initializing the vector store.

  ```python
  vector_store = DatabricksVectorSearch(
      endpoint="<your-endpoint-name>",
      index_name="<your-index-name>",
      ...
  )
  ```

embedding – The embedding model. Required for direct-access index or delta-sync index with self-managed embeddings.
text_column – The name of the text column to use for the embeddings. Required for direct-access index or delta-sync index with self-managed embeddings. Make sure the text column specified is in the index.
columns – The list of column names to get when doing the search. Defaults to [primary_key, text_column].
client_args – Additional arguments to pass to the VectorSearchClient. Allows you to pass in values like service_principal_client_id and service_principal_client_secret to allow for service principal authentication instead of personal access token authentication (see the sketch below).
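As one example of client_args, service principal credentials can be forwarded to the underlying VectorSearchClient; a sketch with placeholder credentials:

```python
vector_store = DatabricksVectorSearch(
    index_name="<your-index-name>",
    client_args={
        # Placeholders: use your service principal's OAuth credentials.
        "service_principal_client_id": "<sp-client-id>",
        "service_principal_client_secret": "<sp-client-secret>",
    },
)
```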
Instantiate:

DatabricksVectorSearch supports two types of indexes:

- Delta Sync Index automatically syncs with a source Delta Table, automatically and incrementally updating the index as the underlying data in the Delta Table changes.
- Direct Vector Access Index supports direct read and write of vectors and metadata. The user is responsible for updating this table using the REST API or the Python SDK.

Also for delta-sync index, you can choose to use Databricks-managed embeddings or self-managed embeddings (via LangChain embeddings classes).

If you are using a delta-sync index with Databricks-managed embeddings:

```python
from databricks_langchain.vectorstores import DatabricksVectorSearch

vector_store = DatabricksVectorSearch(index_name="<your-index-name>")
```

If you are using a direct-access index or a delta-sync index with self-managed embeddings, you also need to provide the embedding model and text column in your source table to use for the embeddings:

```python
from langchain_openai import OpenAIEmbeddings

vector_store = DatabricksVectorSearch(
    index_name="<your-index-name>",
    embedding=OpenAIEmbeddings(),
    text_column="document_content",
)
```
Add Documents:
```python
from langchain_core.documents import Document

document_1 = Document(page_content="foo", metadata={"baz": "bar"})
document_2 = Document(page_content="thud", metadata={"bar": "baz"})
document_3 = Document(page_content="i will be deleted :(")

documents = [document_1, document_2, document_3]
ids = ["1", "2", "3"]
vector_store.add_documents(documents=documents, ids=ids)
```
Delete Documents:
```python
vector_store.delete(ids=["3"])
```
Note
The delete method is only supported for a direct-access index.

Search:
```python
results = vector_store.similarity_search(query="thud", k=1)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
```

```python
* thud [{'id': '2'}]
```
Search with filter:
```python
results = vector_store.similarity_search(query="thud", k=1, filter={"bar": "baz"})
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
```

```python
* thud [{'id': '2'}]
```
Search with score:
```python
results = vector_store.similarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
```

```python
* [SIM=0.748804] foo [{'id': '1'}]
```
Async:
```python
# add documents
await vector_store.aadd_documents(documents=documents, ids=ids)

# delete documents
await vector_store.adelete(ids=["3"])

# search
results = await vector_store.asimilarity_search(query="thud", k=1)

# search with score
results = await vector_store.asimilarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
```

```python
* [SIM=0.748807] foo [{'id': '1'}]
```
Use as Retriever:
```python
retriever = vector_store.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 1, "fetch_k": 2, "lambda_mult": 0.5},
)
retriever.invoke("thud")
```

```python
[Document(metadata={"id": "2"}, page_content="thud")]
```
- property embeddings: Embeddings | None
Access the query embedding object if available.
- classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: List[Dict] | None = None, **kwargs: Any) → VST
Return VectorStore initialized from texts and embeddings.
- Parameters:
texts – Texts to add to the vectorstore.
embedding – Embedding function to use.
metadatas – Optional list of metadatas associated with the texts. Default is None.
ids – Optional list of IDs associated with the texts.
kwargs – Additional keyword arguments.
- Returns:
VectorStore initialized from texts and embeddings.
- Return type:
VectorStore
- add_texts(texts: Iterable[str], metadatas: List[Dict] | None = None, ids: List[Any] | None = None, **kwargs: Any) → List[str]
Add texts to the index.
Note
This method is only supported for a direct-access index.
- Parameters:
texts – List of texts to add.
metadatas – List of metadata for each text. Defaults to None.
ids – List of ids for each text. Defaults to None. If not provided, a random uuid will be generated for each text.
- Returns:
List of ids from adding the texts into the index.
- async aadd_texts(texts: Iterable[str], metadatas: List[dict] | None = None, **kwargs: Any) → List[str]
Async run more texts through the embeddings and add to the vectorstore.
- Parameters:
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts. Default is None.
ids – Optional list of ids associated with the texts.
**kwargs – vectorstore specific parameters.
- Returns:
List of ids from adding the texts into the vectorstore.
- Raises:
ValueError – If the number of metadatas does not match the number of texts.
ValueError – If the number of ids does not match the number of texts.
- delete(ids: List[Any] | None = None, **kwargs: Any) → bool | None
Delete documents from the index.
Note
This method is only supported for a direct-access index.
- Parameters:
ids – List of ids of documents to delete.
- Returns:
True if successful.
- similarity_search(query: str, k: int = 4, filter: Dict[str, Any] | None = None, *, query_type: str | None = None, **kwargs: Any) → List[Document]
Return docs most similar to query.
- Parameters:
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filters to apply to the query. Defaults to None.
query_type – The type of this query. Supported values are "ANN" and "HYBRID".
- Returns:
List of Documents most similar to the embedding.
- async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]
Async return docs most similar to query.
- Parameters:
query – Input text.
k – Number of Documents to return. Defaults to 4.
**kwargs – Arguments to pass to the search method.
- Returns:
List of Documents most similar to the query.
- similarity_search_with_score(query: str, k: int = 4, filter: Dict[str, Any] | None = None, *, query_type: str | None = None, **kwargs: Any) → List[Tuple[Document, float]]
Return docs most similar to query, along with scores.
- Parameters:
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filters to apply to the query. Defaults to None.
query_type – The type of this query. Supported values are "ANN" and "HYBRID".
- Returns:
List of Documents most similar to the embedding and score for each.
- async asimilarity_search_with_score(*args: Any, **kwargs: Any) → List[Tuple[Document, float]]
Async run similarity search with distance.
- Parameters:
*args – Arguments to pass to the search method.
**kwargs – Arguments to pass to the search method.
- Returns:
List of Tuples of (doc, similarity_score).
- similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Any | None = None, *, query_type: str | None = None, query: str | None = None, **kwargs: Any) → List[Document]
Return docs most similar to embedding vector.
- Parameters:
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filters to apply to the query. Defaults to None.
query_type – The type of this query. Supported values are "ANN" and "HYBRID".
- Returns:
List of Documents most similar to the embedding.
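A sketch of searching with a precomputed vector, assuming the index uses self-managed embeddings (embedding_model stands in for any LangChain Embeddings implementation):

```python
# Compute the query vector with your own embedding model, then search with it.
query_vector = embedding_model.embed_query("thud")
docs = vector_store.similarity_search_by_vector(embedding=query_vector, k=1)
```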
- async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]
Async return docs most similar to embedding vector.
- Parameters:
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
**kwargs – Arguments to pass to the search method.
- Returns:
List of Documents most similar to the query vector.
- similarity_search_by_vector_with_score(embedding: List[float], k: int = 4, filter: Any | None = None, *, query_type: str | None = None, query: str | None = None, **kwargs: Any) → List[Tuple[Document, float]]
Return docs most similar to embedding vector, along with scores.
Note
This method is not supported for an index with Databricks-managed embeddings.
- Parameters:
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filters to apply to the query. Defaults to None.
query_type – The type of this query. Supported values are "ANN" and "HYBRID".
- Returns:
List of Documents most similar to the embedding and score for each.
- max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Dict[str, Any] | None = None, *, query_type: str | None = None, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Note
This method is not supported for an index with Databricks-managed embeddings.
- Parameters:
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
filter – Filters to apply to the query. Defaults to None.
query_type – The type of this query. Supported values are "ANN" and "HYBRID".
- Returns:
List of Documents selected by maximal marginal relevance.
- async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Async return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
- Parameters:
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm. Default is 20.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
**kwargs – Arguments to pass to the search method.
- Returns:
List of Documents selected by maximal marginal relevance.
- max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Any | None = None, *, query_type: str | None = None, **kwargs: Any) → List[Document]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Note
This method is not supported for an index with Databricks-managed embeddings.
- Parameters:
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
filter – Filters to apply to the query. Defaults to None.
query_type – The type of this query. Supported values are "ANN" and "HYBRID".
- Returns:
List of Documents selected by maximal marginal relevance.
- async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]
Async return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
- Parameters:
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm. Default is 20.
lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
**kwargs – Arguments to pass to the search method.
- Returns:
List of Documents selected by maximal marginal relevance.
- databricks_langchain.GenieAgent(genie_space_id, genie_agent_name: str = 'Genie', description: str = '', include_context: bool = False, client: WorkspaceClient | None = None)

Create a Genie agent that can be used to query the Genie API. If a description is not provided, the description of the Genie space will be used.
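A minimal sketch of wiring up a Genie agent (the space ID and description below are placeholders):

```python
from databricks_langchain import GenieAgent

genie_agent = GenieAgent(
    genie_space_id="01ef-xxxx-xxxx",  # placeholder Genie space ID
    genie_agent_name="Genie",
    description="Answers natural-language questions about the sales dataset.",
)
```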
- class databricks_langchain.VectorSearchRetrieverTool
Bases: BaseTool, VectorSearchRetrieverToolMixin
A utility class to create a vector search-based retrieval tool for querying indexed embeddings. This class integrates with Databricks Vector Search and provides a convenient interface for building a retriever tool for agents.
- param args_schema: Type[BaseModel] = <class 'databricks_ai_bridge.vector_search_retriever_tool.VectorSearchRetrieverToolInput'>
Pydantic model class to validate and parse the toolβs input arguments.
Args schema should be either:

- A subclass of pydantic.BaseModel, or
- A subclass of pydantic.v1.BaseModel if accessing v1 namespace in pydantic 2, or
- a JSON schema dict.
- param description: str = ''
The description of the tool. Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
- param embedding: Embeddings | None = None
Embedding model for self-managed embeddings.
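A sketch of building a retriever tool and binding it to a chat model; index_name comes from the VectorSearchRetrieverToolMixin, but verify the exact keyword names against your installed version:

```python
from databricks_langchain import ChatDatabricks, VectorSearchRetrieverTool

vs_tool = VectorSearchRetrieverTool(
    index_name="catalog.schema.my_index",  # placeholder index name
    description="Retrieves product documentation relevant to the user query.",
)
llm = ChatDatabricks(model="databricks-meta-llama-3-1-405b-instruct")
llm_with_retrieval = llm.bind_tools([vs_tool])
```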
- class databricks_langchain.UCFunctionToolkit(*, function_names: List[str] = None, tools_dict: Dict[str, UnityCatalogTool] = None, client: BaseFunctionClient | None = None, filter_accessible_functions: bool = False)
Bases: BaseModel
- tools_dict: Dict[str, UnityCatalogTool]
- static uc_function_to_langchain_tool(*, function_name: str, client: BaseFunctionClient | None = None, filter_accessible_functions: bool = False) → UnityCatalogTool | None
Convert a UC function to a LangChain StructuredTool.
- Parameters:
function_name – The full name of the function in the form of "catalog.schema.function"
client – The client for managing functions, must be an instance of BaseFunctionClient
- property tools: List[UnityCatalogTool]
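Typical usage, as a sketch (the function name below is a placeholder):

```python
from databricks_langchain import UCFunctionToolkit

toolkit = UCFunctionToolkit(function_names=["my_catalog.my_schema.my_function"])
tools = toolkit.tools  # List[UnityCatalogTool], ready to bind to a chat model
```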
- class databricks_langchain.UnityCatalogTool
Bases: StructuredTool
- class databricks_langchain.DatabricksFunctionClient(client: WorkspaceClient | None = None, *, profile: str | None = None, execution_mode: str = 'serverless', **kwargs: Any)
Bases: BaseFunctionClient
Databricks UC function calling client
- set_spark_session()
Initialize the spark session with serverless compute if not already active.
- stop_spark_session()
Stop the active Spark session, if one exists.
- initialize_spark_session()
Initialize the spark session with serverless compute. This method is called when the spark session is not active.
- refresh_client_and_session()
Refreshes the Databricks client and Spark session if the session_id has been invalidated due to expiration of temporary credentials. If the client is running within an interactive Databricks notebook environment, the Spark session is not terminated.
- create_function(*, sql_function_body: str | None = None) → FunctionInfo
Create a UC function with the given sql body or function info.
- Note:
databricks-connect is required to use this function. Make sure its version is 15.1.0 or above to use serverless compute.
- Parameters:
sql_function_body – The SQL body of the function. Defaults to None. It should follow the syntax of the CREATE FUNCTION statement in Databricks. Ref: https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-sql-function.html#syntax
- Returns:
The created function info.
- Return type:
FunctionInfo
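A sketch of creating a simple SQL function (catalog, schema, and function body are placeholders):

```python
sql_body = """
CREATE FUNCTION my_catalog.my_schema.add_numbers(a DOUBLE, b DOUBLE)
RETURNS DOUBLE
LANGUAGE SQL
RETURN a + b
"""
function_info = client.create_function(sql_function_body=sql_body)
```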
- create_python_function(*, func: Callable[[...], Any], catalog: str, schema: str, replace: bool = False, dependencies: list[str] | None = None, environment_version: str = 'None') → FunctionInfo
Create a Unity Catalog (UC) function directly from a Python function.
This API allows you to convert a Python function into a Unity Catalog User-Defined Function (UDF). It automates the creation of UC functions while ensuring that the Python function meets certain criteria and adheres to best practices.
Requirements:

- Type Annotations:
  - The Python function must use argument and return type annotations. These annotations are used to generate the SQL signature of the UC function.
  - Supported Python types and their corresponding UC types are as follows:
| Python Type | Unity Catalog Type |
|---|---|
| int | LONG |
| float | DOUBLE |
| str | STRING |
| bool | BOOLEAN |
| Decimal | DECIMAL |
| datetime.date | DATE |
| datetime.timedelta | INTERVAL DAY TO SECOND |
| datetime.datetime | TIMESTAMP |
| list | ARRAY |
| tuple | ARRAY |
| dict | MAP |
| bytes | BINARY |
Example of a valid function:

```python
def my_function(a: int, b: str) -> float:
    return a + len(b)
```
Invalid function (missing type annotations):

```python
def my_function(a, b):
    return a + len(b)
```

Attempting to create a UC function from a function without type hints will raise an error, as the system relies on type hints to generate the UC function's signature.
- For container types such as list, tuple, and dict, the inner types must be specified and must be uniform (Union types are not permitted). For example:

```python
def my_function(a: List[int], b: Dict[str, float]) -> List[str]:
    return [str(x) for x in a]
```

- var args and kwargs are not supported. All arguments must be explicitly defined in the function signature.
- Google Docstring Guidelines:
  - It is required to include detailed Python docstrings in your function to provide additional context. The docstrings will be used to auto-generate parameter descriptions and a function-level comment.
  - A function description must be provided at the beginning of the docstring (within the triple quotes) to describe the function's purpose. This description will be used as the function-level comment in the UC function. The description must be included in the first portion of the docstring prior to any argument descriptions.
  - Parameter descriptions are optional but recommended. If provided, they should be included in the Google-style docstring. The parameter descriptions will be used to auto-generate detailed descriptions for each parameter in the UC function. The additional context provided by these argument descriptions can be useful for agent applications to understand the context of the arguments and their purpose.
  - Only Google-style docstrings are supported for this auto-generation. For example:
```python
def my_function(a: int, b: str) -> float:
    """
    Adds the length of a string to an integer.

    Args:
        a (int): The integer to add to.
        b (str): The string whose length will be added.

    Returns:
        float: The sum of the integer and the string length.
    """
    return a + len(b)
```
  - If docstrings do not conform to Google-style for specifying argument descriptions, parameter descriptions will default to "Parameter <name>", and no further information will be provided in the function comment for the given parameter.

For examples of Google docstring guidelines, see [this link](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html).
- External Dependencies:
  - Unity Catalog UDFs are limited to Python standard libraries and Databricks-provided libraries. If your function relies on unsupported external dependencies, the created UC function may fail at runtime.
  - It is strongly recommended to test the created function by executing it before integrating it into GenAI or other tools.

Function Metadata:
- Docstrings (if provided and Google-style) will automatically be included as detailed descriptions for function parameters as well as for the function itself, enhancing the discoverability of the utility of your UC function.
Example:

```python
def example_function(x: int, y: str) -> float:
    """
    Multiplies an integer by the length of a string.

    Args:
        x (int): The number to be multiplied.
        y (str): A string whose length will be used for multiplication.

    Returns:
        float: The product of the integer and the string length.
    """
    return x * len(y)


client.create_python_function(
    func=example_function,
    catalog="my_catalog",
    schema="my_schema",
)
```
Overwriting a function:
- If a function with the same name already exists in the specified catalog and schema, the function will not be created by default. To overwrite the existing function, set the replace parameter to True.

- param func:
The Python function to convert into a UDF.
- param catalog:
The catalog name in which to create the function.
- param schema:
The schema name in which to create the function.
- param replace:
Whether to replace the function if it already exists. Defaults to False.
- param dependencies:
A list of external dependencies required by the function. Defaults to an empty list. Note that the dependencies parameter is not supported in all runtimes. Ensure that you are using a runtime that supports environment and dependency declaration prior to creating a function that defines dependencies. Standard PyPI package declarations are supported (i.e., requests>=2.25.1).
- param environment_version:
The version of the environment in which the function will be executed. Defaults to "None". Note that the environment_version parameter is not supported in all runtimes. Ensure that you are using a runtime that supports environment and dependency declaration prior to creating a function that declares an environment version.
- returns:
Metadata about the created function, including its name and signature.
- rtype:
FunctionInfo
- create_wrapped_function(*, primary_func: Callable[[...], Any], functions: list[Callable[[...], Any]], catalog: str, schema: str, replace=False, dependencies: list[str] | None = None, environment_version: str = 'None') → FunctionInfo

Create a wrapped function comprised of a primary_func function and in-lined wrapped functions within the primary_func body.

- Note:
databricks-connect is required to use this function. Make sure its version is 15.1.0 or above to use serverless compute.
- Parameters:
primary_func – The primary function to be wrapped.
functions – A list of functions to be wrapped inline within the body of primary_func.
catalog – The catalog name.
schema – The schema name.
replace – Whether to replace the function if it already exists. Defaults to False.
dependencies – A list of external dependencies required by the function. Defaults to an empty list. Note that the dependencies parameter is not supported in all runtimes. Ensure that you are using a runtime that supports environment and dependency declaration prior to creating a function that defines dependencies. Standard PyPI package declarations are supported (i.e., requests>=2.25.1).
environment_version – The version of the environment in which the function will be executed. Defaults to "None". Note that the environment_version parameter is not supported in all runtimes. Ensure that you are using a runtime that supports environment and dependency declaration prior to creating a function that declares an environment version.
- Returns:
Metadata about the created function, including its name and signature.
- Return type:
FunctionInfo
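A sketch of in-lining a helper within a primary function (names are placeholders):

```python
def add(a: int, b: int) -> int:
    """Adds two integers."""
    return a + b


def add_wrapper(a: int, b: int) -> int:
    """
    Adds two integers via the in-lined add helper.

    Args:
        a (int): The first addend.
        b (int): The second addend.

    Returns:
        int: The sum of the two integers.
    """
    return add(a, b)


function_info = client.create_wrapped_function(
    primary_func=add_wrapper,
    functions=[add],
    catalog="my_catalog",
    schema="my_schema",
)
```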
- get_function(function_name: str, **kwargs: Any) → FunctionInfo
Get a function by its name.
- Parameters:
function_name – The name of the function to get.
kwargs – Additional key-value pairs to include when getting the function. Allowed keys for retrieving functions are:
  - include_browse: bool (defaults to None). Whether to include functions in the response for which the principal can only access selective metadata.
Note
The function name shouldn't be *. To get all functions in a catalog and schema, use the list_functions API instead.
- Returns:
The function info.
- Return type:
FunctionInfo
- list_functions(catalog: str, schema: str, max_results: int | None = None, page_token: str | None = None, include_browse: bool | None = None) → PagedList[FunctionInfo]
List functions in a catalog and schema.
- Parameters:
catalog – The catalog name.
schema – The schema name.
max_results – The maximum number of functions to return. Defaults to None.
page_token – The token for the next page. Defaults to None.
include_browse – Whether to include functions in the response for which the principal can only access selective metadata. Defaults to None.
- Returns:
The paginated list of function infos.
- Return type:
PagedList[FunctionInfo]
- execute_function(function_name: str, parameters: Dict[str, Any] | None = None, **kwargs: Any) → FunctionExecutionResult
Execute a UC function by name with the given parameters.
- Parameters:
function_name – The name of the function to execute.
parameters – The parameters to pass to the function. Defaults to None.
kwargs – Additional key-value pairs to include when executing the function.

  Allowed keys for retrieving functions are:
  - include_browse: bool (defaults to False). Whether to include functions in the response for which the principal can only access selective metadata.

  Allowed keys for executing functions are:
  - wait_timeout: str (defaults to 30s). The time in seconds the call will wait for the statement's result set as Ns, where N can be set to 0 or to a value between 5 and 50.
    When set to 0s, the statement will execute in asynchronous mode and the call will not wait for the execution to finish. In this case, the call returns directly with PENDING state and a statement ID which can be used for polling with :method:statementexecution/getStatement.
    When set between 5 and 50 seconds, the call will behave synchronously up to this timeout and wait for the statement execution to finish. If the execution finishes within this time, the call returns immediately with a manifest and result data (or a FAILED state in case of an execution error). If the statement takes longer to execute, on_wait_timeout determines what should happen after the timeout is reached.
  - row_limit: int (defaults to 100). Applies the given row limit to the statement's result set, but unlike the LIMIT clause in SQL, it also sets the truncated field in the response to indicate whether the result was trimmed due to the limit or not.
  - byte_limit: int (defaults to 1048576 = 1 MB). Applies the given byte limit to the statement's result size. Byte counts are based on internal data representations and might not match the final size in the requested format. If the result was truncated due to the byte limit, then truncated in the response is set to true. When using EXTERNAL_LINKS disposition, a default byte_limit of 100 GiB is applied if byte_limit is not explicitly set.
- Returns:
The result of executing the function.
- Return type:
FunctionExecutionResult
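Executing the placeholder SQL function created earlier might look like this (a sketch; the value attribute on the result is an assumption about FunctionExecutionResult):

```python
result = client.execute_function(
    "my_catalog.my_schema.add_numbers",
    parameters={"a": 1.0, "b": 2.0},
)
result.value  # the statement's result, e.g. "3.0" (attribute name assumed)
```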
- delete_function(function_name: str, force: bool | None = None) → None
Delete a function by its full name.
- Parameters:
function_name – The full name of the function to delete. It should be in the format of "catalog.schema.function_name".
force – Force deletion even if the function is not empty. This parameter is used by the underlying Databricks workspace client when deleting a function. If it is None, then the parameter is not included in the request. Defaults to None.
- to_dict()
Store the client configuration in a dictionary. Sensitive information should be excluded.
- get_function_source(function_name: str) → str
Returns the Python callable definition as a string for an EXTERNAL Python function that is stored within Unity Catalog. This function can only parse and extract the full callable definition for Python functions and cannot be used on SQL or TABLE functions.
- Parameters:
function_name – The name of the function to retrieve the Python callable definition for.
- Returns:
The Python callable definition as a string.
- Return type:
str
- get_function_as_callable(function_name: str, register_function: bool = True, namespace: dict[str, Any] | None = None) → Callable[[...], Any]
Returns the Python callable for an EXTERNAL Python function that is stored within Unity Catalog. This function can only parse and extract the full callable definition for Python functions and cannot be used on SQL or TABLE functions.
- Parameters:
function_name – The name of the function to retrieve the Python callable for.
register_function – Whether to register the function in the namespace. Defaults to True.
namespace – The namespace to register the function in. Defaults to None (the global namespace).
- Returns:
The Python callable for the function.
- Return type:
Callable[..., Any]
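For example, recovering and running the Python function created earlier with create_python_function (a sketch; the function name is a placeholder):

```python
example_function = client.get_function_as_callable(
    "my_catalog.my_schema.example_function"
)
example_function(2, "abc")  # 6: runs the recovered Python definition locally
```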