Databricks LangChain Integrations Python API

Setup:

Install databricks-langchain.

pip install -U databricks-langchain

If you are outside Databricks, set the Databricks workspace hostname and personal access token as environment variables:

export DATABRICKS_HOSTNAME="https://your-databricks-workspace"
export DATABRICKS_TOKEN="your-personal-access-token"

Re-exported Unity Catalog Utilities

This module re-exports selected utilities from the Unity Catalog open source package.

Available aliases:

  • UCFunctionToolkit

  • UnityCatalogTool

  • DatabricksFunctionClient

  • set_uc_function_client

Refer to the Unity Catalog documentation for more information.

class databricks_langchain.ChatDatabricks

Bases: BaseChatModel

Databricks chat model integration.

Instantiate:

from databricks_langchain import ChatDatabricks

llm = ChatDatabricks(
    model="databricks-meta-llama-3-1-405b-instruct",
    temperature=0,
    max_tokens=500,
)
For Responses API endpoints, such as an endpoint serving a ResponsesAgent, set use_responses_api=True:
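
llm = ChatDatabricks(
    model="<your-responses-endpoint>",  # placeholder: a Responses API serving endpoint
    use_responses_api=True,
)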

Invoke:

messages = [
    ("system", "You are a helpful translator. Translate the user sentence to French."),
    ("human", "I love programming."),
]
llm.invoke(messages)
AIMessage(
    content="J'adore la programmation.",
    response_metadata={"prompt_tokens": 32, "completion_tokens": 9, "total_tokens": 41},
    id="run-64eebbdd-88a8-4a25-b508-21e9a5f146c5-0",
)

Stream:

for chunk in llm.stream(messages):
    print(chunk)
content='J' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content="'" id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content='ad' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content='ore' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content=' la' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content=' programm' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content='ation' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content='.' id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
content='' response_metadata={'finish_reason': 'stop'} id='run-609b8f47-e580-4691-9ee4-e2109f53155e'
stream = llm.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
full
AIMessageChunk(
    content="J'adore la programmation.",
    response_metadata={"finish_reason": "stop"},
    id="run-4cef851f-6223-424f-ad26-4a54e5852aa5",
)

To get token usage returned when streaming, pass the stream_usage kwarg:

stream = llm.stream(messages, stream_usage=True)
next(stream).usage_metadata
{"input_tokens": 28, "output_tokens": 5, "total_tokens": 33}

Alternatively, setting stream_usage when instantiating the model can be useful when incorporating ChatDatabricks into LCEL chains, or when using methods like .with_structured_output, which generate chains under the hood.

llm = ChatDatabricks(model="databricks-meta-llama-3-1-405b-instruct", stream_usage=True)
structured_llm = llm.with_structured_output(...)

Async:

await llm.ainvoke(messages)

# stream:
# async for chunk in llm.astream(messages)

# batch:
# await llm.abatch([messages])
AIMessage(
    content="J'adore la programmation.",
    response_metadata={"prompt_tokens": 32, "completion_tokens": 9, "total_tokens": 41},
    id="run-e4bb043e-772b-4e1d-9f98-77ccc00c0271-0",
)

Tool calling:

from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
ai_msg.tool_calls
[
    {
        "name": "GetWeather",
        "args": {"location": "Los Angeles, CA"},
        "id": "call_ea0a6004-8e64-4ae8-a192-a40e295bfa24",
        "type": "tool_call",
    }
]

To use tool calls, your model endpoint must support the tools parameter. See [Function calling on Databricks](https://python.langchain.com/docs/integrations/chat/databricks/#function-calling-on-databricks) for more information.
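
To complete the tool-calling loop, execute the requested tool yourself and send its result back as a ToolMessage; a minimal sketch reusing llm_with_tools from above (the weather lookup is a placeholder):

from langchain_core.messages import HumanMessage, ToolMessage

messages = [HumanMessage("What is the weather in LA?")]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:
    # Run the real tool here; a hardcoded string stands in for a weather lookup.
    result = "68 F and sunny"
    messages.append(ToolMessage(content=result, tool_call_id=tool_call["id"]))

final_msg = llm_with_tools.invoke(messages)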

param extra_params: Dict[str, Any] | None = None

Any extra parameters to pass to the endpoint.

param max_tokens: int | None = None

The maximum number of tokens to generate.

param model: str [Required] (alias 'endpoint')

Name of Databricks Model Serving endpoint to query.

param n: int = 1

The number of completion choices to generate.

param stop: List[str] | None = None

List of strings to stop generation at.

param stream_usage: bool = False

Whether to include usage metadata in streaming output. If True, additional message chunks will be generated during the stream including usage metadata.

param target_uri: str = 'databricks'

The target URI to use. Defaults to databricks.

param temperature: float | None = None

Sampling temperature. Higher values make the model more creative.

param use_responses_api: bool = False

Whether to use the Responses API to format inputs and outputs.

bind_tools(tools: Sequence[Dict[str, Any] | Type[BaseModel] | Callable | BaseTool], *, tool_choice: dict | str | Literal['auto', 'none', 'required', 'any'] | bool | None = None, **kwargs: Any) Runnable[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]], BaseMessage]

Bind tool-like objects to this chat model.

Assumes model is compatible with OpenAI tool-calling API.

Parameters:
  • tools – A list of tool definitions to bind to this chat model. Can be a dictionary, pydantic model, callable, or BaseTool. Pydantic models, callables, and BaseTools will be automatically converted to their schema dictionary representation.

  • tool_choice –

    Which tool to require the model to call. Options are:

    • name of the tool (str): Calls the corresponding tool.

    • "auto": Automatically selects a tool (including no tool).

    • "none": Model does not generate any tool calls and instead must generate a standard assistant message.

    • "required" or "any": The model must generate a tool call, picking the most relevant tool in tools.

    • A dictionary of the following form, which forces the model to call the named tool:

      {
          "type": "function",
          "function": {
              "name": "<<tool_name>>"
          }
      }
      

  • **kwargs – Any additional parameters to pass to the Runnable constructor.
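
For example, to force a call to a specific tool, pass its name as tool_choice (a sketch using the GetWeather schema from above):

llm_weather = llm.bind_tools([GetWeather], tool_choice="GetWeather")
msg = llm_weather.invoke("How warm is it in Paris today?")
msg.tool_calls  # contains a GetWeather tool call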

with_structured_output(schema: Dict | Type | None = None, *, method: Literal['function_calling', 'json_mode', 'json_schema'] = 'function_calling', include_raw: bool = False, **kwargs: Any) Runnable[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]], Dict | BaseModel]

Model wrapper that returns outputs formatted to match the given schema.

Assumes model is compatible with OpenAI tool-calling API.

Parameters:
  • schema – The output schema as a dict or a Pydantic class. If a Pydantic class then the model output will be an object of that class. If a dict then the model output will be a dict. With a Pydantic class the returned attributes will be validated, whereas with a dict they will not be. If method is "function_calling" and schema is a dict, then the dict must match the OpenAI function-calling spec or be a valid JSON schema with top level 'title' and 'description' keys specified.

  • method – The method for steering model generation, either "function_calling" or "json_mode". If "function_calling" then the schema will be converted to an OpenAI function and the returned model will make use of the function-calling API. If "json_mode" then OpenAI's JSON mode will be used. Note that if using "json_mode" then you must include instructions for formatting the output into the desired schema in the model call.

  • include_raw – If False then only the parsed structured output is returned. If an error occurs during model output parsing it will be raised. If True then both the raw model response (a BaseMessage) and the parsed model response will be returned. If an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys "raw", "parsed", and "parsing_error".

Returns:

If include_raw is False and schema is a Pydantic class, Runnable outputs an instance of schema (i.e., a Pydantic object).

Otherwise, if include_raw is False then Runnable outputs a dict.

If include_raw is True, then Runnable outputs a dict with keys:
  • "raw": BaseMessage

  • "parsed": None if there was a parsing error, otherwise the type depends on the schema as described above.

  • "parsing_error": Optional[BaseException]

Return type:

A Runnable that takes any ChatModel input and returns outputs as described above.

Examples:

Function-calling, Pydantic schema (method="function_calling", include_raw=False):

from databricks_langchain import ChatDatabricks
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


llm = ChatDatabricks(model="databricks-meta-llama-3-1-70b-instruct")
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")

# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )

Function-calling, Pydantic schema (method="function_calling", include_raw=True):

from databricks_langchain import ChatDatabricks
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


llm = ChatDatabricks(model="databricks-meta-llama-3-1-70b-instruct")
structured_llm = llm.with_structured_output(AnswerWithJustification, include_raw=True)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }

Function-calling, dict schema (method="function_calling", include_raw=False):

from databricks_langchain import ChatDatabricks
from langchain_core.utils.function_calling import convert_to_openai_tool
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


dict_schema = convert_to_openai_tool(AnswerWithJustification)
llm = ChatDatabricks(model="databricks-meta-llama-3-1-70b-instruct")
structured_llm = llm.with_structured_output(dict_schema)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }

JSON mode, Pydantic schema (method="json_mode", include_raw=True):

from databricks_langchain import ChatDatabricks
from pydantic import BaseModel

class AnswerWithJustification(BaseModel):
    answer: str
    justification: str

llm = ChatDatabricks(model="databricks-meta-llama-3-1-70b-instruct")
structured_llm = llm.with_structured_output(
    AnswerWithJustification,
    method="json_mode",
    include_raw=True
)

structured_llm.invoke(
    "Answer the following question. "
    "Make sure to return a JSON blob with keys 'answer' and 'justification'."
    "What's heavier a pound of bricks or a pound of feathers?"
)
# -> {
#     'raw': AIMessage(content='{    "answer": "They are both the same weight.",    "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight." }'),
#     'parsed': AnswerWithJustification(answer='They are both the same weight.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight.'),
#     'parsing_error': None
# }

JSON mode, no schema (schema=None, method="json_mode", include_raw=True):

structured_llm = llm.with_structured_output(method="json_mode", include_raw=True)

structured_llm.invoke(
    "Answer the following question. "
    "Make sure to return a JSON blob with keys 'answer' and 'justification'."
    "What's heavier a pound of bricks or a pound of feathers?"
)
# -> {
#     'raw': AIMessage(content='{    "answer": "They are both the same weight.",    "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight." }'),
#     'parsed': {
#         'answer': 'They are both the same weight.',
#         'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight.'
#     },
#     'parsing_error': None
# }
property endpoint: str

class databricks_langchain.DatabricksEmbeddings

Bases: Embeddings, BaseModel

Databricks embedding model integration.

Instantiate:

from databricks_langchain import DatabricksEmbeddings

embed = DatabricksEmbeddings(
    endpoint="databricks-bge-large-en",
)

Embed single text:

input_text = "The meaning of life is 42"
embed.embed_query(input_text)
[0.01605224609375, -0.0298309326171875, ...]
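
Embed multiple texts with embed_documents, documented below (output is illustrative):

input_texts = ["The meaning of life is 42", "Databricks embeddings"]
embed.embed_documents(input_texts)
[[0.01605224609375, -0.0298309326171875, ...], [...]]
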
param documents_params: Dict[str, Any] = {}

The parameters to use for documents.

param endpoint: str [Required]

Name of Databricks Model Serving endpoint to query.

param query_params: Dict[str, Any] = {}

The parameters to use for the query.

param target_uri: str = 'databricks'

The target URI to use. Defaults to databricks.

embed_documents(texts: List[str]) List[List[float]]

Embed search docs.

Parameters:

texts – List of text to embed.

Returns:

List of embeddings.

embed_query(text: str) List[float]

Embed query text.

Parameters:

text – Text to embed.

Returns:

Embedding.

class databricks_langchain.DatabricksVectorSearch(index_name: str, endpoint: str | None = None, embedding: Embeddings | None = None, text_column: str | None = None, doc_uri: str | None = None, primary_key: str | None = None, columns: List[str] | None = None, workspace_client: WorkspaceClient | None = None, client_args: Dict[str, Any] | None = None, include_score: bool = False)

Bases: VectorStore

Databricks vector store integration.

Parameters:
  • index_name – The name of the index to use. Format: "catalog.schema.index".

  • endpoint –

    The name of the Databricks Vector Search endpoint. If not specified, the endpoint name is automatically inferred based on the index name.

    Note

    If you are using databricks-vectorsearch version < 0.35, the endpoint parameter is required when initializing the vector store.

    vector_store = DatabricksVectorSearch(
        endpoint="<your-endpoint-name>",
        index_name="<your-index-name>",
        ...
    )
    

  • embedding – The embedding model. Required for direct-access index or delta-sync index with self-managed embeddings.

  • text_column – The name of the text column to use for the embeddings. Required for direct-access index or delta-sync index with self-managed embeddings. Make sure the text column specified is in the index.

  • columns – The list of column names to get when doing the search. Defaults to [primary_key, text_column].

  • client_args – Additional arguments to pass to the VectorSearchClient. Allows you to pass in values like service_principal_client_id and service_principal_client_secret to allow for service principal authentication instead of personal access token authentication.

Instantiate:

DatabricksVectorSearch supports two types of indexes:

  • Delta Sync Index automatically syncs with a source Delta Table, incrementally updating the index as the underlying data in the Delta Table changes.

  • Direct Vector Access Index supports direct read and write of vectors and metadata. The user is responsible for updating this table using the REST API or the Python SDK.

For a delta-sync index, you can also choose between Databricks-managed embeddings and self-managed embeddings (via LangChain embeddings classes).

If you are using a delta-sync index with Databricks-managed embeddings:

from databricks_langchain.vectorstores import DatabricksVectorSearch

vector_store = DatabricksVectorSearch(index_name="<your-index-name>")

If you are using a direct-access index or a delta-sync index with self-managed embeddings, you also need to provide the embedding model and text column in your source table to use for the embeddings:

from langchain_openai import OpenAIEmbeddings

vector_store = DatabricksVectorSearch(
    index_name="<your-index-name>",
    embedding=OpenAIEmbeddings(),
    text_column="document_content",
)
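
A Databricks-hosted embedding endpoint can be used instead via DatabricksEmbeddings (a sketch; the endpoint name is an example):

from databricks_langchain import DatabricksEmbeddings

vector_store = DatabricksVectorSearch(
    index_name="<your-index-name>",
    embedding=DatabricksEmbeddings(endpoint="databricks-bge-large-en"),
    text_column="document_content",
)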

Add Documents:

from langchain_core.documents import Document

document_1 = Document(page_content="foo", metadata={"baz": "bar"})
document_2 = Document(page_content="thud", metadata={"bar": "baz"})
document_3 = Document(page_content="i will be deleted :(")
documents = [document_1, document_2, document_3]
ids = ["1", "2", "3"]
vector_store.add_documents(documents=documents, ids=ids)

Delete Documents:

vector_store.delete(ids=["3"])

Note

The delete method is only supported for a direct-access index.

Search:

results = vector_store.similarity_search(query="thud", k=1)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
* thud [{'id': '2'}]

Search with filter:

results = vector_store.similarity_search(query="thud", k=1, filter={"bar": "baz"})
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
* thud [{'id': '2'}]

Search with score:

results = vector_store.similarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
* [SIM=0.748804] foo [{'id': '1'}]
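
Search with hybrid query (a sketch; the query_type parameter is documented under similarity_search below):

results = vector_store.similarity_search(query="thud", k=1, query_type="HYBRID")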

Async:

# add documents
await vector_store.aadd_documents(documents=documents, ids=ids)
# delete documents
await vector_store.adelete(ids=["3"])
# search
results = await vector_store.asimilarity_search(query="thud", k=1)
# search with score
results = await vector_store.asimilarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
* [SIM=0.748807] foo [{'id': '1'}]

Use as Retriever:

retriever = vector_store.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 1, "fetch_k": 2, "lambda_mult": 0.5},
)
retriever.invoke("thud")
[Document(metadata={"id": "2"}, page_content="thud")]
property embeddings: Embeddings | None

Access the query embedding object if available.

classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: List[Dict] | None = None, **kwargs: Any) VST

Return VectorStore initialized from texts and embeddings.

Parameters:
  • texts – Texts to add to the vectorstore.

  • embedding – Embedding function to use.

  • metadatas – Optional list of metadatas associated with the texts. Default is None.

  • ids – Optional list of IDs associated with the texts.

  • kwargs – Additional keyword arguments.

Returns:

VectorStore initialized from texts and embeddings.

Return type:

VectorStore

add_texts(texts: Iterable[str], metadatas: List[Dict] | None = None, ids: List[Any] | None = None, **kwargs: Any) List[str]

Add texts to the index.

Note

This method is only supported for a direct-access index.

Parameters:
  • texts – List of texts to add.

  • metadatas – List of metadata for each text. Defaults to None.

  • ids – List of ids for each text. Defaults to None. If not provided, a random uuid will be generated for each text.

Returns:

List of ids from adding the texts into the index.
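
A minimal sketch for a direct-access index (values are placeholders):

ids = vector_store.add_texts(
    texts=["foo", "thud"],
    metadatas=[{"baz": "bar"}, {"bar": "baz"}],
    ids=["1", "2"],
)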

async aadd_texts(texts: Iterable[str], metadatas: List[dict] | None = None, **kwargs: Any) List[str]

Async run more texts through the embeddings and add to the vectorstore.

Parameters:
  • texts – Iterable of strings to add to the vectorstore.

  • metadatas – Optional list of metadatas associated with the texts. Default is None.

  • ids – Optional list of IDs associated with the texts.

  • **kwargs – vectorstore specific parameters.

Returns:

List of ids from adding the texts into the vectorstore.

Raises:
  • ValueError – If the number of metadatas does not match the number of texts.

  • ValueError – If the number of ids does not match the number of texts.

delete(ids: List[Any] | None = None, **kwargs: Any) bool | None

Delete documents from the index.

Note

This method is only supported for a direct-access index.

Parameters:

ids – List of ids of documents to delete.

Returns:

True if successful.

similarity_search(query: str, k: int = 4, filter: Dict[str, Any] | None = None, *, query_type: str | None = None, **kwargs: Any) List[Document]

Return docs most similar to query.

Parameters:
  • query – Text to look up documents similar to.

  • k – Number of Documents to return. Defaults to 4.

  • filter – Filters to apply to the query. Defaults to None.

  • query_type – The type of this query. Supported values are "ANN" and "HYBRID".

Returns:

List of Documents most similar to the query.

async asimilarity_search(query: str, k: int = 4, **kwargs: Any) List[Document]

Async return docs most similar to query.

Parameters:
  • query – Input text.

  • k – Number of Documents to return. Defaults to 4.

  • **kwargs – Arguments to pass to the search method.

Returns:

List of Documents most similar to the query.

similarity_search_with_score(query: str, k: int = 4, filter: Dict[str, Any] | None = None, *, query_type: str | None = None, **kwargs: Any) List[Tuple[Document, float]]

Return docs most similar to query, along with scores.

Parameters:
  • query – Text to look up documents similar to.

  • k – Number of Documents to return. Defaults to 4.

  • filter – Filters to apply to the query. Defaults to None.

  • query_type – The type of this query. Supported values are "ANN" and "HYBRID".

Returns:

List of Documents most similar to the embedding and score for each.

async asimilarity_search_with_score(*args: Any, **kwargs: Any) List[Tuple[Document, float]]

Async run similarity search with distance.

Parameters:
  • *args – Arguments to pass to the search method.

  • **kwargs – Arguments to pass to the search method.

Returns:

List of Tuples of (doc, similarity_score).

similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Any | None = None, *, query_type: str | None = None, query: str | None = None, **kwargs: Any) List[Document]

Return docs most similar to embedding vector.

Parameters:
  • embedding – Embedding to look up documents similar to.

  • k – Number of Documents to return. Defaults to 4.

  • filter – Filters to apply to the query. Defaults to None.

  • query_type – The type of this query. Supported values are "ANN" and "HYBRID".

Returns:

List of Documents most similar to the embedding.

async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) List[Document]

Async return docs most similar to embedding vector.

Parameters:
  • embedding – Embedding to look up documents similar to.

  • k – Number of Documents to return. Defaults to 4.

  • **kwargs – Arguments to pass to the search method.

Returns:

List of Documents most similar to the query vector.

similarity_search_by_vector_with_score(embedding: List[float], k: int = 4, filter: Any | None = None, *, query_type: str | None = None, query: str | None = None, **kwargs: Any) List[Tuple[Document, float]]

Return docs most similar to embedding vector, along with scores.

Note

This method is not supported for indexes with Databricks-managed embeddings.

Parameters:
  • embedding – Embedding to look up documents similar to.

  • k – Number of Documents to return. Defaults to 4.

  • filter – Filters to apply to the query. Defaults to None.

  • query_type – The type of this query. Supported values are "ANN" and "HYBRID".

Returns:

List of Documents most similar to the embedding and score for each.

max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Any | None = None, *, query_type: str | None = None, **kwargs: Any) List[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Note

This method is not supported for indexes with Databricks-managed embeddings.

Parameters:
  • query – Text to look up documents similar to.

  • k – Number of Documents to return. Defaults to 4.

  • fetch_k – Number of Documents to fetch to pass to MMR algorithm.

  • lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

  • filter – Filters to apply to the query. Defaults to None.

  • query_type – The type of this query. Supported values are "ANN" and "HYBRID".

Returns:

List of Documents selected by maximal marginal relevance.

async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) List[Document]

Async return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:
  • query – Text to look up documents similar to.

  • k – Number of Documents to return. Defaults to 4.

  • fetch_k – Number of Documents to fetch to pass to MMR algorithm. Default is 20.

  • lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

  • **kwargs – Arguments to pass to the search method.

Returns:

List of Documents selected by maximal marginal relevance.

max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Any | None = None, *, query_type: str | None = None, **kwargs: Any) List[Document]

Return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Note

This method is not supported for indexes with Databricks-managed embeddings.

Parameters:
  • embedding – Embedding to look up documents similar to.

  • k – Number of Documents to return. Defaults to 4.

  • fetch_k – Number of Documents to fetch to pass to MMR algorithm.

  • lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

  • filter – Filters to apply to the query. Defaults to None.

  • query_type – The type of this query. Supported values are "ANN" and "HYBRID".

Returns:

List of Documents selected by maximal marginal relevance.

async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) List[Document]

Async return docs selected using the maximal marginal relevance.

Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.

Parameters:
  • embedding – Embedding to look up documents similar to.

  • k – Number of Documents to return. Defaults to 4.

  • fetch_k – Number of Documents to fetch to pass to MMR algorithm. Default is 20.

  • lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.

  • **kwargs – Arguments to pass to the search method.

Returns:

List of Documents selected by maximal marginal relevance.

databricks_langchain.GenieAgent(genie_space_id, genie_agent_name: str = 'Genie', description: str = '', include_context: bool = False, client: WorkspaceClient | None = None)

Create a Genie agent that can be used to query a Databricks Genie space. If a description is not provided, the description of the Genie space will be used.
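
A minimal usage sketch, assuming an existing Genie space (the space ID and question are placeholders, and the agent is assumed to accept a state dict with a "messages" key):

from databricks_langchain import GenieAgent
from langchain_core.messages import HumanMessage

agent = GenieAgent(
    genie_space_id="<your-genie-space-id>",
    description="Answers questions about sales data",
)
agent.invoke({"messages": [HumanMessage("Which region had the highest revenue?")]})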

class databricks_langchain.VectorSearchRetrieverTool

Bases: BaseTool, VectorSearchRetrieverToolMixin

A utility class to create a vector search-based retrieval tool for querying indexed embeddings. This class integrates with Databricks Vector Search and provides a convenient interface for building a retriever tool for agents.
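
A minimal instantiation sketch (the index name and descriptions are placeholders; tool_name, tool_description, and num_results are assumed to be provided by VectorSearchRetrieverToolMixin):

from databricks_langchain import VectorSearchRetrieverTool

tool = VectorSearchRetrieverTool(
    index_name="catalog.schema.index",
    tool_name="docs_retriever",
    tool_description="Retrieves relevant documentation snippets.",
    num_results=3,
)
tool.invoke("What is Databricks Vector Search?")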

param args_schema: Type[BaseModel] = <class 'databricks_ai_bridge.vector_search_retriever_tool.VectorSearchRetrieverToolInput'>

Pydantic model class to validate and parse the tool’s input arguments.

Args schema should be either:

  • a subclass of pydantic.BaseModel,

  • a subclass of pydantic.v1.BaseModel if accessing the v1 namespace in Pydantic 2, or

  • a JSON schema dict.

param description: str = ''

The description of the tool, used to tell the model how/when/why to use it. You can provide few-shot examples as part of the description.

param embedding: Embeddings | None = None

Embedding model for self-managed embeddings.

param name: str = ''

The unique name of the tool that clearly communicates its purpose.

param text_column: str | None = None

The name of the text column to use for the embeddings. Required for direct-access index or delta-sync index with self-managed embeddings.

class databricks_langchain.UCFunctionToolkit(*, function_names: List[str] = None, tools_dict: Dict[str, UnityCatalogTool] = None, client: BaseFunctionClient | None = None, filter_accessible_functions: bool = False)

Bases: BaseModel

function_names: List[str]

tools_dict: Dict[str, UnityCatalogTool]

client: BaseFunctionClient | None

filter_accessible_functions: bool

class Config

Bases: object

arbitrary_types_allowed = True

classmethod validate_toolkit(values) Dict[str, Any]

static uc_function_to_langchain_tool(*, function_name: str, client: BaseFunctionClient | None = None, filter_accessible_functions: bool = False) UnityCatalogTool | None

Convert a UC function to a LangChain StructuredTool.

Parameters:
  • function_name – The full name of the function in the form of 'catalog.schema.function'

  • client – The client for managing functions, must be an instance of BaseFunctionClient

property tools: List[UnityCatalogTool]
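
A typical flow creates a DatabricksFunctionClient, registers it globally with set_uc_function_client (documented below), and builds the toolkit from fully qualified function names (the function name is a placeholder):

from databricks_langchain import (
    DatabricksFunctionClient,
    UCFunctionToolkit,
    set_uc_function_client,
)

client = DatabricksFunctionClient()
set_uc_function_client(client)

toolkit = UCFunctionToolkit(function_names=["my_catalog.my_schema.my_function"])
tools = toolkit.tools  # List[UnityCatalogTool], ready to bind to an LLM
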
class databricks_langchain.UnityCatalogTool

Bases: StructuredTool

param client_config: Dict[str, Any] [Required]

Configuration of the client for managing the tool.

param uc_function_name: str [Required]

The full name of the function in the form of 'catalog.schema.function'.
class databricks_langchain.DatabricksFunctionClient(client: WorkspaceClient | None = None, *, profile: str | None = None, execution_mode: str = 'serverless', **kwargs: Any)

Bases: BaseFunctionClient

Databricks UC function calling client

set_spark_session()

Initialize the Spark session with serverless compute if not already active.

stop_spark_session()

Stop the active Spark session.

initialize_spark_session()

Initialize the Spark session with serverless compute. This method is called when the Spark session is not active.

refresh_client_and_session()

Refreshes the Databricks client and Spark session if the session_id has been invalidated due to expiration of temporary credentials. If the client is running within an interactive Databricks notebook environment, the Spark session is not terminated.

create_function(*, sql_function_body: str | None = None) FunctionInfo

Create a UC function with the given SQL body or function info.

Note: databricks-connect is required to use this function; make sure its version is 15.1.0 or above to use serverless compute.

Parameters:

sql_function_body – The SQL body of the function. Defaults to None. It should follow the syntax of the CREATE FUNCTION statement in Databricks. Ref: https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-sql-function.html#syntax

Returns:

The created function info.

Return type:

FunctionInfo
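
A sketch of creating a simple SQL function (catalog and schema names are placeholders):

sql_body = """
CREATE FUNCTION my_catalog.my_schema.add_numbers(a DOUBLE, b DOUBLE)
RETURNS DOUBLE
COMMENT 'Adds two numbers.'
RETURN a + b
"""

function_info = client.create_function(sql_function_body=sql_body)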

create_python_function(*, func: Callable[[...], Any], catalog: str, schema: str, replace: bool = False, dependencies: list[str] | None = None, environment_version: str = 'None') FunctionInfo

Create a Unity Catalog (UC) function directly from a Python function.

This API allows you to convert a Python function into a Unity Catalog User-Defined Function (UDF). It automates the creation of UC functions while ensuring that the Python function meets certain criteria and adheres to best practices.

Requirements:

  1. Type Annotations:

    • The Python function must use argument and return type annotations. These annotations are used to generate the SQL signature of the UC function.

    • Supported Python types and their corresponding UC types are as follows:

      Python Type          | Unity Catalog Type
      ---------------------|------------------------
      int                  | LONG
      float                | DOUBLE
      str                  | STRING
      bool                 | BOOLEAN
      Decimal              | DECIMAL
      datetime.date        | DATE
      datetime.timedelta   | INTERVAL DAY TO SECOND
      datetime.datetime    | TIMESTAMP
      list                 | ARRAY
      tuple                | ARRAY
      dict                 | MAP
      bytes                | BINARY

    • Example of a valid function:

      def my_function(a: int, b: str) -> float:
          return a + len(b)

    • Invalid function (missing type annotations):

      def my_function(a, b):
          return a + len(b)

      Attempting to create a UC function from a function without type hints will raise an error, as the system relies on type hints to generate the UC function's signature.

    • For container types like list, tuple and dict, the inner types must be specified and must be uniform (Union types are not permitted). For example:

      def my_function(a: List[int], b: Dict[str, float]) -> List[str]:
          return [str(x) for x in a]

    • Varargs and kwargs are not supported. All arguments must be explicitly defined in the function signature.

  2. Google Docstring Guidelines:

    • It is required to include detailed Python docstrings in your function to provide additional context. The docstrings will be used to auto-generate parameter descriptions and a function-level comment.

    • A function description must be provided at the beginning of the docstring (within the triple quotes) to describe the function's purpose. This description will be used as the function-level comment in the UC function. The description must be included in the first portion of the docstring, prior to any argument descriptions.

    • Parameter descriptions are optional but recommended. If provided, they should be included in the Google-style docstring. The parameter descriptions will be used to auto-generate detailed descriptions for each parameter in the UC function. The additional context provided by these argument descriptions can be useful for agent applications to understand the context of the arguments and their purpose.

    • Only Google-style docstrings are supported for this auto-generation. For example:

      def my_function(a: int, b: str) -> float:
          """Adds the length of a string to an integer.

          Args:
              a (int): The integer to add to.
              b (str): The string whose length will be added.

          Returns:
              float: The sum of the integer and the string length.
          """
          return a + len(b)

    • If docstrings do not conform to Google-style for specifying argument descriptions, parameter descriptions will default to "Parameter <name>", and no further information will be provided in the function comment for the given parameter.

    For examples of Google docstring guidelines, see [this link](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html).

  3. External Dependencies:

    • Unity Catalog UDFs are limited to Python standard libraries and Databricks-provided libraries. If your function relies on unsupported external dependencies, the created UC function may fail at runtime.

    • It is strongly recommended to test the created function by executing it before integrating it into GenAI or other tools.

Function Metadata:

Docstrings (if provided and Google-style) will automatically be included as detailed descriptions for function parameters as well as for the function itself, enhancing the discoverability of the utility of your UC function.

Example:

def example_function(x: int, y: str) -> float:
    """Multiplies an integer by the length of a string.

    Args:
        x (int): The number to be multiplied.
        y (str): A string whose length will be used for multiplication.

    Returns:
        float: The product of the integer and the string length.
    """
    return x * len(y)

client.create_python_function(
    func=example_function, catalog="my_catalog", schema="my_schema"
)

Overwriting a function: If a function with the same name already exists in the specified catalog and schema, the function will not be created by default. To overwrite the existing function, set the replace parameter to True.

Parameters:
  • func – The Python function to convert into a UDF.

  • catalog – The catalog name in which to create the function.

  • schema – The schema name in which to create the function.

  • replace – Whether to replace the function if it already exists. Defaults to False.

  • dependencies – A list of external dependencies required by the function. Defaults to an empty list. Note that the dependencies parameter is not supported in all runtimes. Ensure that you are using a runtime that supports environment and dependency declaration prior to creating a function that defines dependencies. Standard PyPI package declarations are supported (e.g., requests>=2.25.1).

  • environment_version – The version of the environment in which the function will be executed. Defaults to 'None'. Note that the environment_version parameter is not supported in all runtimes. Ensure that you are using a runtime that supports environment and dependency declaration prior to creating a function that declares an environment version.

Returns:

Metadata about the created function, including its name and signature.

Return type:

FunctionInfo

create_wrapped_function(*, primary_func: Callable[[...], Any], functions: list[Callable[[...], Any]], catalog: str, schema: str, replace=False, dependencies: list[str] | None = None, environment_version: str = 'None') FunctionInfo

Create a wrapped function composed of a primary_func function and inlined helper functions within the primary_func body.

Note: databricks-connect is required to use this function; make sure its version is 15.1.0 or above to use serverless compute.

Parameters:
  • primary_func – The primary function to be wrapped.

  • functions – A list of functions to be wrapped inline within the body of primary_func.

  • catalog – The catalog name.

  • schema – The schema name.

  • replace – Whether to replace the function if it already exists. Defaults to False.

  • dependencies – A list of external dependencies required by the function. Defaults to an empty list. Note that the dependencies parameter is not supported in all runtimes. Ensure that you are using a runtime that supports environment and dependency declaration prior to creating a function that defines dependencies. Standard PyPI package declarations are supported (e.g., requests>=2.25.1).

  • environment_version – The version of the environment in which the function will be executed. Defaults to 'None'. Note that the environment_version parameter is not supported in all runtimes. Ensure that you are using a runtime that supports environment and dependency declaration prior to creating a function that declares an environment version.

Returns:

Metadata about the created function, including its name and signature.

Return type:

FunctionInfo
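
A sketch of wrapping a helper into a primary function (names are placeholders; both functions need type hints and Google-style docstrings per create_python_function's requirements):

def double(x: int) -> int:
    """Doubles an integer.

    Args:
        x (int): The value to double.

    Returns:
        int: Twice the input.
    """
    return x * 2

def double_plus_one(a: int) -> int:
    """Doubles an integer and adds one.

    Args:
        a (int): The input value.

    Returns:
        int: Twice the input plus one.
    """
    return double(a) + 1

client.create_wrapped_function(
    primary_func=double_plus_one,
    functions=[double],
    catalog="my_catalog",
    schema="my_schema",
)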

get_function(function_name: str, **kwargs: Any) FunctionInfo

Get a function by its name.

Parameters:
  • function_name – The name of the function to get.

  • kwargs – Additional key-value pairs to include when getting the function. Allowed keys for retrieving functions are:

    • include_browse – bool (defaults to None). Whether to include functions in the response for which the principal can only access selective metadata.

Note

The function name shouldn't be *. To get all functions in a catalog and schema, please use the list_functions API instead.

Returns:

The function info.

Return type:

FunctionInfo

list_functions(catalog: str, schema: str, max_results: int | None = None, page_token: str | None = None, include_browse: bool | None = None) PagedList[FunctionInfo]

List functions in a catalog and schema.

Parameters:
  • catalog – The catalog name.

  • schema – The schema name.

  • max_results – The maximum number of functions to return. Defaults to None.

  • page_token – The token for the next page. Defaults to None.

  • include_browse – Whether to include functions in the response for which the principal can only access selective metadata. Defaults to None.

Returns:

The paginated list of function infos.

Return type:

PagedList[FunctionInfo]

execute_function(function_name: str, parameters: Dict[str, Any] | None = None, **kwargs: Any) FunctionExecutionResult

Execute a UC function by name with the given parameters.

Parameters:
  • function_name – The name of the function to execute.

  • parameters – The parameters to pass to the function. Defaults to None.

  • kwargs –

    Additional key-value pairs to include when executing the function.

    Allowed keys for retrieving functions are:

    • include_browse: bool (defaults to False)

      Whether to include functions in the response for which the principal can only access selective metadata.

    Allowed keys for executing functions are:

    • wait_timeout: str (defaults to 30s)

      The time in seconds the call will wait for the statement's result set, as Ns, where N can be set to 0 or to a value between 5 and 50.

      When set to 0s, the statement will execute in asynchronous mode and the call will not wait for the execution to finish. In this case, the call returns directly with PENDING state and a statement ID which can be used for polling with the statementexecution/getStatement method.

      When set between 5 and 50 seconds, the call will behave synchronously up to this timeout and wait for the statement execution to finish. If the execution finishes within this time, the call returns immediately with a manifest and result data (or a FAILED state in case of an execution error). If the statement takes longer to execute, on_wait_timeout determines what should happen after the timeout is reached.

    • row_limit: int (defaults to 100)

      Applies the given row limit to the statement's result set, but unlike the LIMIT clause in SQL, it also sets the truncated field in the response to indicate whether the result was trimmed due to the limit or not.

    • byte_limit: int (defaults to 1048576 = 1 MB)

      Applies the given byte limit to the statement's result size. Byte counts are based on internal data representations and might not match the final size in the requested format. If the result was truncated due to the byte limit, then truncated in the response is set to true. When using EXTERNAL_LINKS disposition, a default byte_limit of 100 GiB is applied if byte_limit is not explicitly set.

Returns:

The result of executing the function.

Return type:

FunctionExecutionResult
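
A sketch of executing the SQL function created above (assuming the result object exposes its output via a value attribute):

result = client.execute_function(
    "my_catalog.my_schema.add_numbers",
    parameters={"a": 1.0, "b": 2.0},
)
result.value  # e.g. "3.0"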

delete_function(function_name: str, force: bool | None = None) None

Delete a function by its full name.

Parameters:
  • function_name – The full name of the function to delete. It should be in the format of β€œcatalog.schema.function_name”.

  • force – Force deletion even if the function is not empty. This parameter is used by the underlying Databricks workspace client when deleting a function. If it is None then the parameter is not included in the request. Defaults to None.

to_dict()

Store the client configuration in a dictionary. Sensitive information should be excluded.

classmethod from_dict(config: Dict[str, Any])

get_function_source(function_name: str) str

Returns the Python callable definition as a string for an EXTERNAL Python function that is stored within Unity Catalog. This function can only parse and extract the full callable definition for Python functions and cannot be used on SQL or TABLE functions.

Parameters:

function_name – The name of the function to retrieve the Python callable definition for.

Returns:

The Python callable definition as a string.

Return type:

str

get_function_as_callable(function_name: str, register_function: bool = True, namespace: dict[str, Any] | None = None) Callable[[...], Any]

Returns the Python callable for an EXTERNAL Python function that is stored within Unity Catalog. This function can only parse and extract the full callable definition for Python functions and cannot be used on SQL or TABLE functions.

Parameters:
  • function_name – The name of the function to retrieve the Python callable for.

  • register_function – Whether to register the function in the namespace. Defaults to True.

  • namespace – The namespace to register the function in. Defaults to None (the global namespace).

Returns:

The Python callable for the function.

Return type:

Callable[…, Any]
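
A sketch, reusing the example_function registered above via create_python_function:

fn = client.get_function_as_callable("my_catalog.my_schema.example_function")
fn(3, "abc")  # 9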

databricks_langchain.set_uc_function_client(client: BaseFunctionClient) None

Set the global function client to be used for Unity Catalog function execution.