Documentation ¶
Overview ¶
Package adapt adds functionality for converting between BigQuery representations, such as schemas and data types.
It is EXPERIMENTAL and subject to change or removal without notice.
Index ¶
- func BQSchemaToStorageTableSchema(in bigquery.Schema) (*storagepb.TableSchema, error)
- func NormalizeDescriptor(in protoreflect.MessageDescriptor) (*descriptorpb.DescriptorProto, error)
- func StorageSchemaToProto2Descriptor(inSchema *storagepb.TableSchema, scope string) (protoreflect.Descriptor, error)
- func StorageSchemaToProto3Descriptor(inSchema *storagepb.TableSchema, scope string) (protoreflect.Descriptor, error)
- func StorageSchemaToProtoDescriptorWithOptions(inSchema *storagepb.TableSchema, scope string, opts ...ProtoConversionOption) (protoreflect.Descriptor, error)
- func StorageTableSchemaToBQSchema(in *storagepb.TableSchema) (bigquery.Schema, error)
- type ProtoConversionOption
- type ProtoMapping
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func BQSchemaToStorageTableSchema ¶
func BQSchemaToStorageTableSchema(in bigquery.Schema) (*storagepb.TableSchema, error)
BQSchemaToStorageTableSchema converts a bigquery Schema into the protobuf-based TableSchema used by the BigQuery Storage WriteClient.
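For illustration, a minimal sketch of the conversion, assuming the standard import paths for the bigquery and adapt packages; the column names are hypothetical.

import (
	"fmt"
	"log"

	"cloud.google.com/go/bigquery"
	"cloud.google.com/go/bigquery/storage/managedwriter/adapt"
)

func convertSchema() {
	// A simple bigquery.Schema with two columns.
	schema := bigquery.Schema{
		{Name: "id", Type: bigquery.IntegerFieldType, Required: true},
		{Name: "name", Type: bigquery.StringFieldType},
	}
	// Convert it into the storage API's TableSchema representation.
	tableSchema, err := adapt.BQSchemaToStorageTableSchema(schema)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(tableSchema)
}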
func NormalizeDescriptor ¶ added in v1.22.0
func NormalizeDescriptor(in protoreflect.MessageDescriptor) (*descriptorpb.DescriptorProto, error)
NormalizeDescriptor builds a self-contained DescriptorProto suitable for communicating schema information with the BigQuery Storage write API. It's primarily used for cases where users are interested in sending data using a predefined protocol buffer message.
The storage API accepts a single DescriptorProto for decoding message data. In many cases, a message is composed of multiple independent messages, from the same .proto file or from multiple sources. Rather than requiring all of these messages to be communicated independently, this method rewrites the DescriptorProto to inline all messages as nested submessages. Because the backend only cares about the types, not the namespaces, when decoding, this is sufficient for the needs of the API's representation.
In addition to nesting messages, this method also handles some encapsulation of enum types to avoid possible conflicts due to ambiguities, and clears oneof indices as oneof isn't a concept that maps into BigQuery schemas.
To enable proto3 usage, this function will also rewrite proto3 descriptors into equivalent proto2 form. Such rewrites include setting the appropriate default values for proto3 fields.
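As an illustration, a minimal sketch of normalizing the descriptor of a compiled message; mypb.Record is a hypothetical generated message type, and wrapping the result in a storagepb.ProtoSchema shows one common way the normalized descriptor is carried to the write API.

// Obtain the message descriptor from a generated message type (hypothetical mypb.Record).
descriptor := (&mypb.Record{}).ProtoReflect().Descriptor()

// Rewrite it into a self-contained DescriptorProto with all dependencies nested.
descriptorProto, err := adapt.NormalizeDescriptor(descriptor)
if err != nil {
	log.Fatal(err)
}

// The normalized descriptor can then be carried in the write API's ProtoSchema.
protoSchema := &storagepb.ProtoSchema{ProtoDescriptor: descriptorProto}
_ = protoSchema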
func StorageSchemaToProto2Descriptor ¶ added in v1.21.0
func StorageSchemaToProto2Descriptor(inSchema *storagepb.TableSchema, scope string) (protoreflect.Descriptor, error)
StorageSchemaToProto2Descriptor builds a protoreflect.Descriptor for a given table schema using proto2 syntax.
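A minimal sketch of how the returned descriptor can be used, assuming a hand-built TableSchema; the dynamicpb step reflects a common usage pattern but is not required.

// A TableSchema with a single required INT64 column.
tableSchema := &storagepb.TableSchema{
	Fields: []*storagepb.TableFieldSchema{
		{Name: "id", Type: storagepb.TableFieldSchema_INT64, Mode: storagepb.TableFieldSchema_REQUIRED},
	},
}
descriptor, err := adapt.StorageSchemaToProto2Descriptor(tableSchema, "root")
if err != nil {
	log.Fatal(err)
}
// The returned Descriptor describes a message; assert it to build dynamic messages.
messageDescriptor, ok := descriptor.(protoreflect.MessageDescriptor)
if !ok {
	log.Fatal("adapted descriptor is not a message descriptor")
}
msg := dynamicpb.NewMessage(messageDescriptor)
msg.Set(messageDescriptor.Fields().ByName("id"), protoreflect.ValueOfInt64(1))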
func StorageSchemaToProto3Descriptor ¶ added in v1.21.0
func StorageSchemaToProto3Descriptor(inSchema *storagepb.TableSchema, scope string) (protoreflect.Descriptor, error)
StorageSchemaToProto3Descriptor builds a protoreflect.Descriptor for a given table schema using proto3 syntax.
NOTE: Currently the write API doesn't yet support proto3 behaviors (default values, wrapper types, etc.), but this is provided for completeness.
func StorageSchemaToProtoDescriptorWithOptions ¶ added in v1.70.0
func StorageSchemaToProtoDescriptorWithOptions(inSchema *storagepb.TableSchema, scope string, opts ...ProtoConversionOption) (protoreflect.Descriptor, error)
StorageSchemaToProtoDescriptorWithOptions builds a protoreflect.Descriptor for a given table schema with extra configuration options. Uses proto2 syntax by default.
func StorageTableSchemaToBQSchema ¶
func StorageTableSchemaToBQSchema(in *storagepb.TableSchema) (bigquery.Schema, error)
StorageTableSchemaToBQSchema converts a TableSchema from the BigQuery Storage WriteClient into the equivalent BigQuery Schema.
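A minimal round-trip sketch, assuming tableSchema is a *storagepb.TableSchema such as the one built in the earlier example.

bqSchema, err := adapt.StorageTableSchemaToBQSchema(tableSchema)
if err != nil {
	log.Fatal(err)
}
for _, field := range bqSchema {
	fmt.Printf("%s: %s\n", field.Name, field.Type)
}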
Types ¶
type ProtoConversionOption ¶ added in v1.70.0
type ProtoConversionOption interface {
// contains filtered or unexported methods
}
ProtoConversionOption customizes proto descriptor conversion.
func WithProtoMapping ¶ added in v1.70.0
func WithProtoMapping(protoMapping ProtoMapping) ProtoConversionOption
WithProtoMapping allows setting an override for which field descriptor proto type is used for a given BigQuery table field type or field path. See https://cloud.google.com/bigquery/docs/supported-data-types#supported_protocol_buffer_data_types for the types accepted by the BigQuery Storage Write API.
Examples:
// WithTimestampAsTimestamp defines that table fields of type Timestamp are mapped
// as Google's WKT timestamppb.Timestamp.
func WithTimestampAsTimestamp() ProtoConversionOption {
	return WithProtoMapping(ProtoMapping{
		FieldType: storagepb.TableFieldSchema_TIMESTAMP,
		TypeName:  "google.protobuf.Timestamp",
		Type:      descriptorpb.FieldDescriptorProto_TYPE_MESSAGE,
	})
}
// WithIntervalAsDuration defines that table fields of type Interval are mapped
// as Google's WKT durationpb.Duration.
func WithIntervalAsDuration() ProtoConversionOption {
	return WithProtoMapping(ProtoMapping{
		FieldType: storagepb.TableFieldSchema_INTERVAL,
		TypeName:  "google.protobuf.Duration",
		Type:      descriptorpb.FieldDescriptorProto_TYPE_MESSAGE,
	})
}
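A minimal sketch of passing such a mapping to StorageSchemaToProtoDescriptorWithOptions, assuming tableSchema is a *storagepb.TableSchema containing a TIMESTAMP field; the scope string "root" is arbitrary.

descriptor, err := adapt.StorageSchemaToProtoDescriptorWithOptions(
	tableSchema,
	"root",
	adapt.WithProtoMapping(adapt.ProtoMapping{
		FieldType: storagepb.TableFieldSchema_TIMESTAMP,
		TypeName:  "google.protobuf.Timestamp",
		Type:      descriptorpb.FieldDescriptorProto_TYPE_MESSAGE,
	}),
)
if err != nil {
	log.Fatal(err)
}
_ = descriptor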
type ProtoMapping ¶ added in v1.70.0
type ProtoMapping struct {
// FieldPath should be in the `fieldA.subFieldB.anotherSubFieldC` format.
FieldPath string
// FieldType is the BigQuery Table field type to be overridden.
FieldType storagepb.TableFieldSchema_Type
// TypeName is the fully qualified path name for the protobuf type.
// Example: ".google.protobuf.Timestamp", ".google.protobuf.Duration", etc.
TypeName string
// Type is the final DescriptorProto Type.
Type descriptorpb.FieldDescriptorProto_Type
}
ProtoMapping can be used to override protobuf types used when converting from a BigQuery Schema to a Protobuf Descriptor. See WithProtoMapping option.
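For a more targeted override, a FieldPath can identify a specific (possibly nested) field instead of matching on field type; the path below is hypothetical, and this sketch assumes FieldPath alone is sufficient to select the field.

// Map only the hypothetical nested field "event.created_at" to the well-known Timestamp type.
mapping := adapt.ProtoMapping{
	FieldPath: "event.created_at",
	TypeName:  "google.protobuf.Timestamp",
	Type:      descriptorpb.FieldDescriptorProto_TYPE_MESSAGE,
}
opt := adapt.WithProtoMapping(mapping)
_ = opt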