Databricks FeatureEngineeringClient
- class databricks.feature_engineering.client.FeatureEngineeringClient(*, model_registry_uri: Optional[str] = None)
Bases: object
Client for interacting with Databricks Feature Engineering in Unity Catalog.
Note
Use Databricks FeatureStoreClient for workspace feature tables in the Hive metastore.
- create_feature(*, source: DataSource, inputs: List[str], function: Union[Function, str], time_window: Union[TimeWindow, Dict[str, timedelta]], catalog_name: str, schema_name: str, name: Optional[str] = None, description: Optional[str] = None, filter_condition: Optional[str] = None) Feature
Note
Experimental: This function may change or be removed in a future release without warning.
Create a Feature instance with comprehensive validation.
- Parameters
source – Required. The data source for this feature (DeltaTableSource or DataFrameSource)
inputs – Required. List of column names from the source to use as input
function – Required. The aggregation function to apply (Function instance or string name)
time_window – Required. The time window for the aggregation (TimeWindow instance or dict with ‘duration’ and optional ‘offset’)
catalog_name – Required. The catalog name for the feature
schema_name – Required. The schema name for the feature
name – Optional name for the feature
description – Optional description of the feature
filter_condition – Optional. SQL WHERE clause to filter source data before aggregation. Example: “amount > 100” or “status = ‘completed’”
- Returns
A validated Feature instance
- Raises
ValueError – If any validation fails
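For example, a minimal sketch of defining a 7-day sum feature. The data source construction is not shown here; delta_source stands in for a DeltaTableSource instance, and the "SUM" function name is an assumption for illustration:
from datetime import timedelta

from databricks.feature_engineering import FeatureEngineeringClient

fe = FeatureEngineeringClient()

# delta_source is a DeltaTableSource built elsewhere; its constructor
# arguments are not shown here.
feature = fe.create_feature(
    source=delta_source,
    inputs=["amount"],                            # column from the source
    function="SUM",                               # assumed aggregation function name
    time_window={"duration": timedelta(days=7)},  # dict form with 'duration' (and optional 'offset')
    catalog_name="ml",
    schema_name="dev",
    name="amount_sum_7d",
    description="Sum of transaction amounts over the trailing 7 days",
    filter_condition="status = 'completed'",
)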
- get_feature(*, full_name: str) Feature
Retrieve a feature definition from Unity Catalog.
- delete_feature(*, full_name: str) None
Note
Experimental: This function may change or be removed in a future release without warning.
Delete a feature from Unity Catalog.
- Parameters
full_name – The full name of the feature in format ‘<catalog>.<schema>.<feature_name>’
- list_materialized_features(*, feature_name: Optional[str] = None, max_results: int = 100) List[MaterializedFeature]
List materialized features in the user’s Unity Catalog metastore.
- Parameters
feature_name – Optional. Filter by feature name. If specified, only materialized features materialized from this feature will be returned.
max_results – The maximum number of features to return across the user’s metastore, default is 100. Specifying a high number may result in a long-running operation.
- Returns
List of MaterializedFeature instances
- delete_materialized_feature(*, materialized_feature: MaterializedFeature) None
Delete a materialized feature.
- Parameters
materialized_feature – The materialized feature to delete.
- grant_feature_serving_permissions(*, features: List[Feature], users: List[str]) None
Note
Experimental: This function may change or be removed in a future release without warning.
Grant permissions on online tables (materialized feature tables) of the given features to specified users.
This function is a temporary solution to address permission gaps when deploying models that use materialized features. The person deploying a model may have permission on the Feature itself, but not on the online table it is materialized to (if they are not the one who materialized the features).
Eventually calling this API will be unnecessary as the permissions will automatically be in place once a long-term permission propagation solution is implemented.
For each Feature, this method grants:
USE_CATALOG on the catalog
USE_SCHEMA on the schema
SELECT on the online table
- Parameters
features – List of Feature objects to grant online table access for
users – List of user identifiers (e.g., email addresses) to grant permissions to
- Raises
Exception – If any validation fails or if any grant fails
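For example, a short sketch granting online-table access for two Feature objects (obtained earlier via create_feature() or get_feature()) to two users; the user emails are illustrative:
fe.grant_feature_serving_permissions(
    features=[feature_a, feature_b],   # Feature objects from create_feature()/get_feature()
    users=["first.user@example.com", "second.user@example.com"],
)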
- create_feature_serving_endpoint(*, name: Optional[str] = None, config: Optional[EndpointCoreConfig] = None, **kwargs) FeatureServingEndpoint
Creates a Feature Serving Endpoint.
- Parameters
name – The name of the endpoint. Must only contain alphanumerics and dashes.
config – Configuration of the endpoint, including features, workload_size, etc.
- create_feature_spec(*, name: str, features: List[Union[FeatureLookup, FeatureFunction]], exclude_columns: Optional[List[str]] = None) FeatureSpecInfo
Creates a feature specification in Unity Catalog. The feature spec can be used for serving features & functions.
- Parameters
name – The name of the feature spec.
features – List of FeatureLookups and FeatureFunctions to include in the feature spec.
exclude_columns – List of columns to drop from the final output.
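A minimal sketch of creating a feature spec from a single FeatureLookup; the table, column, and spec names are illustrative:
from databricks.feature_engineering import FeatureEngineeringClient, FeatureLookup

fe = FeatureEngineeringClient()

fe.create_feature_spec(
    name="ml.dev.customer_feature_spec",
    features=[
        FeatureLookup(
            table_name="ml.dev.customer_features",
            feature_name="account_creation_date",
            lookup_key="customer_id",
        ),
    ],
    exclude_columns=["customer_id"],
)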
- update_feature_spec(*, name: str, owner: str) None
Update the owner of a feature spec.
- Parameters
name – The name of the feature spec.
owner – The new owner of the feature spec.
- delete_feature_spec(*, name: str) None
Delete a feature spec.
- Parameters
name – The name of the feature spec.
- create_table(*, name: str, primary_keys: Union[str, List[str]], df: Optional[DataFrame] = None, timeseries_column: Optional[str] = None, partition_columns: Optional[Union[str, List[str]]] = None, schema: Optional[StructType] = None, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None, **kwargs) FeatureTable
Create and return a feature table with the given name and primary keys.
The returned feature table has the given name and primary keys. Uses the provided schema or the inferred schema of the provided df. If df is provided, this data will be saved in a Delta table. Supported data types for features are: IntegerType, LongType, FloatType, DoubleType, StringType, BooleanType, DateType, TimestampType, ShortType, ArrayType, MapType, BinaryType, DecimalType, and StructType.
- Parameters
name – A feature table name. The format is <catalog_name>.<schema_name>.<table_name>, for example ml.dev.user_features.
primary_keys – The feature table’s primary keys. If multiple columns are required, specify a list of column names, for example ['customer_id', 'region'].
df – Data to insert into this feature table. The schema of df will be used as the feature table schema.
timeseries_column – Column containing the event time associated with the feature value. The timeseries column should be part of the primary keys. Combined, the timeseries column and the other primary keys of the feature table uniquely identify the feature value for an entity at a point in time.
partition_columns – Columns used to partition the feature table. If a list is provided, column ordering in the list will be used for partitioning.
Note
When choosing partition columns for your feature table, use columns that do not have a high cardinality. An ideal strategy would be such that you expect data in each partition to be at least 1 GB. The most commonly used partition column is a date. Additional info: Choosing the right partition columns for Delta tables
schema – Feature table schema. Either schema or df must be provided.
description – Description of the feature table.
tags – Tags to associate with the feature table.
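For example, a sketch of creating a time series feature table from an existing Spark DataFrame; features_df and the column names are assumptions for illustration:
from databricks.feature_engineering import FeatureEngineeringClient

fe = FeatureEngineeringClient()

# features_df is a Spark DataFrame computed upstream with one row per
# (customer_id, event_ts) pair; names here are hypothetical.
fe.create_table(
    name="ml.dev.user_features",
    primary_keys=["customer_id", "event_ts"],
    timeseries_column="event_ts",
    df=features_df,
    partition_columns=["event_date"],
    description="Customer features keyed by customer_id and event time",
    tags={"team": "trust_and_safety"},
)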
- create_training_set(*, df: DataFrame, feature_lookups: Optional[List[Union[FeatureLookup, FeatureFunction]]] = None, feature_spec: Optional[str] = None, features: Optional[List[Feature]] = None, label: Optional[Union[str, List[str]]], exclude_columns: Optional[List[str]] = None, use_spark_native_join: bool = False, **kwargs) TrainingSet
Create a TrainingSet using feature_lookups, feature_spec, or features.
- Parameters
df – The DataFrame used to join features into.
feature_lookups – List of features to use in the TrainingSet. FeatureLookups are joined into the DataFrame, and FeatureFunctions are computed on-demand. feature_lookups cannot be specified if feature_spec or features is provided.
feature_spec – Full name of the FeatureSpec in Unity Catalog. feature_spec cannot be specified if feature_lookups or features is provided.
features – List of Feature objects to use in the training set. features cannot be specified if feature_lookups or feature_spec is provided.
label – Names of column(s) in the DataFrame that contain training set labels. To create a training set without a label field, for example for an unsupervised training set, specify label=None.
exclude_columns – Names of the columns to drop from the TrainingSet DataFrame.
use_spark_native_join – Use Spark to optimize table joining performance. The optimization is only applicable when Photon is enabled.
- Returns
A TrainingSet object.
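A minimal sketch of the feature_spec path (the FeatureLookup path is shown under score_batch below). raw_df, the label column, and the feature spec name are assumptions for illustration:
# Assumes raw_df is a Spark DataFrame containing the lookup keys and a label column,
# and that the feature spec was created earlier with create_feature_spec().
training_set = fe.create_training_set(
    df=raw_df,
    feature_spec="ml.dev.customer_feature_spec",  # hypothetical FeatureSpec full name
    label="is_banned",
    exclude_columns=["customer_id"],
)
training_df = training_set.load_df()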
- delete_feature_serving_endpoint(*, name=None, **kwargs) None
- delete_feature_table_tag(*, name: str, key: str) None
Delete the tag associated with the feature table. Deleting a non-existent tag will emit a warning.
- Parameters
name – the feature table name.
key – the tag key to delete.
- drop_online_table(name: str, online_store: OnlineStoreSpec) None
Drop a table in an online store.
This API first attempts to make a call to the online store provider to drop the table. If successful, it then deletes the online store from the feature catalog.
- Parameters
name – Name of feature table associated with online store table to drop.
online_store – Specification of the online store.
Note
Deleting an online published table can lead to unexpected failures in downstream dependencies. Ensure that the online table being dropped is no longer used for Model Serving feature lookup or any other use cases.
- drop_table(*, name: str) None
Delete the specified feature table. This API also drops the underlying Delta table.
- Parameters
name – A feature table name. The format is <catalog_name>.<schema_name>.<table_name>, for example ml.dev.user_features.
Note
Deleting a feature table can lead to unexpected failures in upstream producers and downstream consumers (models, endpoints, and scheduled jobs). You must delete any existing published online stores separately.
- get_feature_serving_endpoint(*, name=None, **kwargs) FeatureServingEndpoint
- get_table(*, name: str) FeatureTable
Get a feature table’s metadata.
- Parameters
name – A feature table name. The format is <catalog_name>.<schema_name>.<table_name>, for example ml.dev.user_features.
- log_model(*, model: Any, artifact_path: str, flavor: module, training_set: Optional[TrainingSet] = None, registered_model_name: Optional[str] = None, await_registration_for: int = 300, infer_input_example: bool = False, **kwargs)
Log an MLflow model packaged with feature lookup information.
Note
The DataFrame returned by TrainingSet.load_df() must be used to train the model. If it has been modified (for example, data normalization, adding a column, and similar), these modifications will not be applied at inference time, leading to training-serving skew.
- Parameters
model – Model to be saved. This model must be capable of being saved by flavor.save_model. See the MLflow Model API.
artifact_path – Run-relative artifact path.
flavor – MLflow module to use to log the model. flavor should have type ModuleType. The module must have a method save_model, and must support the python_function flavor. For example, mlflow.sklearn, mlflow.xgboost, and similar.
training_set – The TrainingSet used to train this model.
registered_model_name – When provided, create a model version under registered_model_name, also creating a registered model if one with the given name does not exist.
await_registration_for – Number of seconds to wait for the model version to finish being created and reach READY status. By default, the function waits for five minutes. Specify 0 or None to skip waiting.
infer_input_example –
Note
Experimental: This argument may change or be removed in a future release without warning.
Automatically log an input example along with the model, using supplied training data. Defaults to False.
- Kwargs
If other keyword arguments are provided, in most cases, they are passed to the underlying MLflow API flavor.save_model() when saving and registering the model.
Note
signature is not recommended and it’s preferred to use infer_input_example.
output_schema: When logging a model without labels in the training set, you must pass output_schema to log_model to suggest the output schema explicitly. For example:
from mlflow.types import ColSpec, DataType, Schema

output_schema = Schema([ColSpec(DataType.???)])  # Refer to mlflow signature types for the right choice of type here
...
fe.log_model(
    ...
    output_schema=output_schema
)
- Returns
- publish_table(*, online_store: Union[OnlineStoreSpec, DatabricksOnlineStore], source_table_name: Optional[str] = None, online_table_name: Optional[str] = None, publish_mode: str = 'TRIGGERED', filter_condition: Optional[str] = None, mode: str = 'merge', streaming: bool = False, checkpoint_location: Optional[str] = None, trigger: Dict[str, Any] = {'processingTime': '5 minutes'}, features: Optional[Union[str, List[str]]] = None, **kwargs) Optional[Union[StreamingQuery, PublishedTable]]
Publish a feature table to an online store.
- Parameters
source_table_name – Name of the feature table. This is a required parameter.
online_table_name – Name of the online table. This is a required parameter when publishing to Databricks Online Store.
online_store – Specification of the online store. This is a required parameter.
publish_mode – Supported modes are "SNAPSHOT", "CONTINUOUS", and "TRIGGERED". Default is "TRIGGERED".
Note
Change Data Feed (CDF) must be enabled for CONTINUOUS and TRIGGERED modes.
- Parameters
filter_condition – A SQL expression using feature table columns that filters feature rows prior to publishing to the online store. For example, "dt > '2020-09-10'". This is analogous to running df.filter or a WHERE condition in SQL on a feature table prior to publishing.
mode – Specifies the behavior when data already exists in this feature table. The only supported mode is "merge", with which the new data will be merged in, under these conditions:
If a key exists in the online table but not the offline table, the row in the online table is unmodified.
If a key exists in the offline table but not the online table, the offline table row is inserted into the online table.
If a key exists in both the offline and the online tables, the online table row will be updated.
streaming – If True, streams data to the online store.
checkpoint_location – Sets the Structured Streaming checkpointLocation option. By setting a checkpoint_location, Spark Structured Streaming will store progress information and intermediate state, enabling recovery after failures. This parameter is only supported when streaming=True.
trigger – If streaming=True, trigger defines the timing of stream data processing. The dictionary will be unpacked and passed to DataStreamWriter.trigger as arguments. For example, trigger={'once': True} will result in a call to DataStreamWriter.trigger(once=True).
features – Specifies the feature column(s) to be published to the online store. The selected features must be a superset of existing online store features. Primary key columns and timestamp key columns will always be published.
Note
When features is not set, the whole feature table will be published.
- Returns
If streaming=True, returns a PySpark StreamingQuery, None otherwise.
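A minimal sketch of publishing a feature table to a Databricks Online Store. The online_store object is assumed to have been constructed already (its constructor arguments are not shown), and the table names are illustrative:
# online_store is an OnlineStoreSpec or DatabricksOnlineStore built elsewhere.
fe.publish_table(
    source_table_name="ml.dev.user_features",
    online_table_name="ml.dev.user_features_online",  # required for Databricks Online Store
    online_store=online_store,
    publish_mode="TRIGGERED",                          # requires Change Data Feed on the source table
    filter_condition="dt > '2020-09-10'",
)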
- read_table(*, name: str, **kwargs) DataFrame
Read the contents of a feature table.
- Parameters
name – A feature table name. The format is <catalog_name>.<schema_name>.<table_name>, for example ml.dev.user_features.
- Returns
The feature table contents, or an exception will be raised if this feature table does not exist.
- score_batch(*, model_uri: str, df: DataFrame, result_type: str = 'double', env_manager: str = 'local', params: Optional[dict[str, Any]] = None, use_spark_native_join: bool = False, **kwargs) DataFrame
Evaluate the model on the provided DataFrame.
Additional features required for model evaluation will be automatically retrieved from feature tables.
The model must have been logged with FeatureEngineeringClient.log_model(), which packages the model with feature metadata. Unless present in df, these features will be looked up from feature tables and joined with df prior to scoring the model.
If a feature is included in df, the provided feature values will be used rather than those stored in feature tables.
For example, if a model is trained on two features account_creation_date and num_lifetime_purchases, as in:
feature_lookups = [
    FeatureLookup(
        table_name = 'trust_and_safety.customer_features',
        feature_name = 'account_creation_date',
        lookup_key = 'customer_id',
    ),
    FeatureLookup(
        table_name = 'trust_and_safety.customer_features',
        feature_name = 'num_lifetime_purchases',
        lookup_key = 'customer_id'
    ),
]

with mlflow.start_run():
    training_set = fe.create_training_set(
        df,
        feature_lookups = feature_lookups,
        label = 'is_banned',
        exclude_columns = ['customer_id']
    )
    ...
    fe.log_model(
        model,
        "model",
        flavor=mlflow.sklearn,
        training_set=training_set,
        registered_model_name="example_model"
    )
Then at inference time, the caller of FeatureEngineeringClient.score_batch() must pass a DataFrame that includes customer_id, the lookup_key specified in the FeatureLookups of the training_set. If the DataFrame contains a column account_creation_date, the values of this column will be used in lieu of those in feature tables. As in:
# batch_df has columns ['customer_id', 'account_creation_date']
predictions = fe.score_batch(
    'models:/example_model/1',
    batch_df
)
- Parameters
model_uri – The location, in URI format, of the MLflow model logged using FeatureEngineeringClient.log_model(). One of:
runs:/<mlflow_run_id>/run-relative/path/to/model
models:/<model_name>/<model_version>
models:/<model_name>/<stage>
For more information about URI schemes, see Referencing Artifacts.
df – The DataFrame to score the model on. Features from feature tables will be joined with df prior to scoring the model. df must:
1. Contain columns for lookup keys required to join feature data from feature tables, as specified in the feature_spec.yaml artifact.
2. Contain columns for all source keys required to score the model, as specified in the feature_spec.yaml artifact.
3. Not contain a column prediction, which is reserved for the model’s predictions.
df may contain additional columns. Streaming DataFrames are not supported.
result_type – The return type of the model. See mlflow.pyfunc.spark_udf() result_type.
env_manager – The environment manager to use in order to create the Python environment for model inference. See mlflow.pyfunc.spark_udf() env_manager.
params – Additional parameters to pass to the model for inference.
use_spark_native_join – Use Spark to optimize table joining performance. The optimization is only applicable when Photon is enabled.
- Returns
A DataFrame containing:
All columns of df.
All feature values retrieved from feature tables.
A column prediction containing the output of the model.
- set_feature_table_tag(*, name: str, key: str, value: str) None
Create or update a tag associated with the feature table. If the tag with the corresponding key already exists, its value will be overwritten with the new value.
- Parameters
name – the feature table name
key – tag key
value – tag value
- write_table(*, name: str, df: DataFrame, mode: str = 'merge', checkpoint_location: Optional[str] = None, trigger: Dict[str, Any] = {'processingTime': '5 seconds'}) Optional[StreamingQuery]
Writes to a feature table.
If the input DataFrame is streaming, will create a write stream.
- Parameters
name – A feature table name. The format is <catalog_name>.<schema_name>.<table_name>, for example ml.dev.user_features.
df – Spark DataFrame with feature data. Raises an exception if the schema does not match that of the feature table.
mode – There is only one supported write mode: "merge" upserts the rows in df into the feature table. If df contains columns not present in the feature table, these columns will be added as new features.
If you want to overwrite a table, run SQL DELETE FROM <table name>; to delete all rows, or drop and recreate the table before calling this method.
checkpoint_location – Sets the Structured Streaming checkpointLocation option. By setting a checkpoint_location, Spark Structured Streaming will store progress information and intermediate state, enabling recovery after failures. This parameter is only supported when the argument df is a streaming DataFrame.
trigger – If df.isStreaming, trigger defines the timing of stream data processing. The dictionary will be unpacked and passed to DataStreamWriter.trigger as arguments. For example, trigger={'once': True} will result in a call to DataStreamWriter.trigger(once=True).
- Returns
If df.isStreaming, returns a PySpark StreamingQuery. None otherwise.
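Two short sketches, one batch and one streaming; the DataFrames, table name, and checkpoint path are assumptions for illustration:
# Batch upsert: merges the rows of updates_df into the feature table.
fe.write_table(
    name="ml.dev.user_features",
    df=updates_df,          # non-streaming Spark DataFrame matching the table schema
    mode="merge",
)

# Streaming upsert: returns a StreamingQuery; the checkpoint path is illustrative.
query = fe.write_table(
    name="ml.dev.user_features",
    df=streaming_updates_df,
    mode="merge",
    checkpoint_location="/Volumes/ml/dev/checkpoints/user_features",
    trigger={"processingTime": "1 minute"},
)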
- compute_features(features: List[Feature]) DataFrame
Computes the specified features and returns a DataFrame containing the results.
Note: This method only supports computing features from a single source.
- Parameters
features – The features to compute.
- aggregate_features(*, features: FeatureAggregations) DataFrame
Computes the specified aggregations and returns a DataFrame containing the results.
- Parameters
features – The aggregation specification to compute.
- create_materialized_view(*, features: FeatureAggregations, output_table: str, schedule: Optional[CronSchedule]) MaterializedViewInfo
Creates and runs a pipeline that materializes the given feature aggregation specification into a materialized view.
- Parameters
features – The aggregation specification to materialize.
output_table – The name of the output materialized view.
schedule – The schedule at which to run the materialization pipeline. If not provided, the pipeline can only be run manually.
- create_pipeline(*, features: List[Feature], offline_store_config: OfflineStoreConfig, schedule: Optional[CronSchedule] = None, start_time: Optional[datetime] = None) List[MaterializedViewInfo]
Creates and runs a pipeline that materializes the given features into tables.
Features are grouped by (source, granularity) and each unique combination creates a separate materialized view.
Restrictions:
- Supports SlidingWindow and TumblingWindow features
- Features within the same (source, granularity) group must have the same granularity (slide_duration for SlidingWindow, window_duration for TumblingWindow)
- All features must use a single column for aggregation
- ContinuousWindow is not supported at this time
- Parameters
features – List of features to materialize
offline_store_config – The offline store configuration containing catalog, schema, and table prefix
schedule – The schedule at which to run the pipeline (applied to all created views)
start_time – The earliest time to generate features from. If not provided, will use the minimum timestamp from the source table’s timeseries column
- Returns
List of MaterializedViewInfo, one per unique (source, granularity) combination
- build_model_env(model_uri: str, save_path: str) str
Prebuild the Python environment required by the given model and generate an archive file saved to the specified save_path. The resulting environment can then be used in FeatureEngineeringClient.score_batch() as the prebuilt_env_uri parameter.
- Parameters
model_uri – URI of the model used to build the Python environment.
save_path – Directory path to save the prebuilt model environment archive file. This can be a local directory path, a mounted DBFS path (e.g., ‘/dbfs/…’), or a mounted UC volume path (e.g., ‘/Volumes/…’).
- Returns
The path of the archive file containing the Python environment data.
- create_online_store(*, name: str, capacity: str, read_replica_count: Optional[int] = None, usage_policy_id: Optional[str] = None) DatabricksOnlineStore
Create an Online Feature Store.
- Parameters
name – The name of the online store.
capacity – The capacity of the online store. Valid values are “CU_1”, “CU_2”, “CU_4”, “CU_8”.
read_replica_count – Optional. The number of read replicas for the online store.
usage_policy_id – Optional. ID of the usage policy to associate with the online store used for cost tracking.
- Returns
The created online store.
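A minimal sketch of creating a small online store; the store name and replica count are illustrative:
store = fe.create_online_store(
    name="ml-online-store",   # hypothetical store name
    capacity="CU_2",
    read_replica_count=1,
)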
- get_online_store(*, name: str) Optional[DatabricksOnlineStore]
Get an Online Feature Store.
Note
This method is experimental and is expected to be removed in version 0.13.0.
- Parameters
name – The name of the online store.
- Returns
The retrieved online store, or None if not found.
- update_online_store(*, name: str, capacity: Union[str, _UnsetType] = <UNSET>, read_replica_count: Union[int, _UnsetType] = <UNSET>, usage_policy_id: Union[str, _UnsetType] = <UNSET>) DatabricksOnlineStore
Update an Online Feature Store. Only the fields specified will be updated. Fields that are not specified will remain unchanged.
- Parameters
name – The name of the online store.
capacity – Optional. The capacity of the online store. Valid values are “CU_1”, “CU_2”, “CU_4”, “CU_8”.
read_replica_count – Optional. The number of read replicas for the online store.
usage_policy_id – Optional. ID of the usage policy to associate with the online store used for cost tracking.
- Returns
The updated online store.
- delete_online_store(*, name: str) None
Delete an Online Feature Store.
- Parameters
name – The name of the online store.
- Returns
None.
- list_online_stores() List[DatabricksOnlineStore]
List available Online Feature Stores. These are online stores that are associated with a feature store. Note that online stores not associated with a feature store will be excluded from the output.
- Returns
A list of DatabricksOnlineStore objects.
- materialize_features(*, features: List[Feature], offline_config: OfflineStoreConfig, online_config: Optional[OnlineStoreConfig] = None, pipeline_state: Union[MaterializedFeaturePipelineScheduleState, str], cron_schedule: Optional[str] = None) Optional[List[MaterializedFeature]]
Direct Databricks to materialize a list of feature definitions stored in Unity Catalog to an offline (Delta Live Table) store and/or an online store for model serving. Note that the features must already exist in Unity Catalog as functions, which can be created using FeatureEngineeringClient.create_feature().
If both offline_config and online_config are provided, there will be two materialized features created and returned for each feature provided.
- Parameters
features – The list of features to materialize.
offline_config – The offline store configuration. Required.
online_config – The online store configuration. Optional.
pipeline_state – A supported materialized feature pipeline schedule state, currently only “ACTIVE” is supported.
cron_schedule – The cron schedule for the materialization pipeline. May be omitted in future pipeline states.
- Returns
The list of materialized features.
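A minimal sketch of materializing a single feature to an offline store. The OfflineStoreConfig construction is not shown (its exact constructor arguments are not covered here), and the cron expression is a hypothetical placeholder:
# offline_config is an OfflineStoreConfig pointing at the target catalog/schema;
# its constructor arguments are not shown here.
materialized = fe.materialize_features(
    features=[feature],           # Feature objects from create_feature()/get_feature()
    offline_config=offline_config,
    pipeline_state="ACTIVE",      # currently the only supported state
    cron_schedule="0 0 * * * ?",  # hypothetical cron expression for the pipeline
)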
- create_kafka_config(*, name: str, bootstrap_servers: str, subscription_mode: SubscriptionMode, auth_config: AuthConfig, key_schema: Optional[SchemaConfig] = None, value_schema: Optional[SchemaConfig] = None, extra_options: Optional[Dict[str, str]] = None) KafkaConfig
Note
Experimental: This function may change or be removed in a future release without warning.
Create a Kafka configuration for streaming feature ingestion.
- Parameters
name – Required. A unique name for the Kafka configuration within the metastore. Can be distinct from the topic name.
bootstrap_servers – Required. A comma-separated list of host/port pairs pointing to the Kafka cluster (e.g. “host1:port1,host2:port2”). See https://spark.apache.org/docs/latest/streaming/structured-streaming-kafka-integration.html for more details on this parameter.
subscription_mode – Required. A SubscriptionMode instance specifying how to select Kafka topics to consume from. Supports three modes: subscribe (comma-separated topic list), subscribe_pattern (regex pattern), or assign (JSON with specific topic-partitions). See https://spark.apache.org/docs/latest/streaming/structured-streaming-kafka-integration.html for more details on this parameter.
auth_config – Required. An AuthConfig instance containing authentication configuration for connecting to Kafka topics. Supports Unity Catalog service credentials. See https://docs.databricks.com/aws/en/connect/unity-catalog/cloud-services for more details on UC service credentials.
key_schema – Optional. A SchemaConfig instance defining the schema for extracting message keys from Kafka topics. Supports IETF JSON Schema format (https://json-schema.org/). At least one of key_schema or value_schema must be provided.
value_schema – Optional. A SchemaConfig instance defining the schema for extracting message values from Kafka topics. Supports IETF JSON Schema format (https://json-schema.org/). At least one of key_schema or value_schema must be provided.
extra_options – Optional. A dictionary of additional Kafka consumer configuration options. Keys should be source options or Kafka consumer options (e.g. {"kafka.security.protocol": "SASL_SSL"}). See https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html for more details on optional configurations.
- Returns
KafkaConfig instance containing the created configuration.
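A minimal sketch of registering a Kafka configuration. The SubscriptionMode, AuthConfig, and SchemaConfig instances are assumed to have been constructed already (their constructors are not shown), and the config name and broker addresses are illustrative:
# subscription, auth, and value_schema are SubscriptionMode, AuthConfig, and
# SchemaConfig instances built elsewhere.
kafka_config = fe.create_kafka_config(
    name="clickstream_events",                        # hypothetical config name
    bootstrap_servers="broker-1:9092,broker-2:9092",
    subscription_mode=subscription,
    auth_config=auth,
    value_schema=value_schema,                        # at least one of key_schema/value_schema is required
    extra_options={"kafka.security.protocol": "SASL_SSL"},
)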
- get_kafka_config(*, name: str) KafkaConfig
Note
Experimental: This function may change or be removed in a future release without warning.
Retrieve a Kafka configuration by name within the caller’s metastore.
- Parameters
name – Required. The name of the Kafka configuration to retrieve.
- Returns
KafkaConfig instance containing the configuration.
- list_kafka_configs(*, max_results: int = 50, include_schemas: bool = False) List[KafkaConfig]
Note
Experimental: This function may change or be removed in a future release without warning.
List Kafka configurations in the current metastore.
- Parameters
max_results – The maximum number of configs to return, default is 50. Specifying a high number may result in a long-running operation.
include_schemas – Whether to include schema information in the response, default is False. Schemas may be large and can be retrieved individually instead using
FeatureEngineeringClient.get_kafka_config(). Specifying True may result in a long-running operation.
- Returns
List of KafkaConfig instances.
- delete_kafka_config(*, name: str) None
Note
Experimental: This function may change or be removed in a future release without warning.
Delete a Kafka configuration.
- Parameters
name – Required. The name of the Kafka configuration to delete.
Note
The caller must be the creator of the Kafka configuration in order to delete it.