A structured message reporting an autoscaling decision made by the Dataflow
service.
Settings for WorkerPool autoscaling.
Metadata for a BigQuery connector used by the job.
Metadata for a Cloud Bigtable connector used by the job.
Request to check whether active jobs exist for a project.
Response for CheckActiveJobsRequest.
All configuration data for a particular Computation.
Container Spec.
A request to create a Cloud Dataflow job from a template.
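As a rough illustration, a minimal sketch of this request using the generated v1beta3 Python client (google-cloud-dataflow-client); the client class, field names, and all identifiers below are assumptions drawn from the protos, not something this list prescribes:

```python
# Sketch (assumed API surface): create a Dataflow job from a classic template.
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.TemplatesServiceClient()

request = dataflow_v1beta3.CreateJobFromTemplateRequest(
    project_id="my-project",        # placeholder project
    location="us-central1",         # placeholder regional endpoint
    job_name="wordcount-example",
    gcs_path="gs://dataflow-templates/latest/Word_Count",
    parameters={
        "inputFile": "gs://my-bucket/input.txt",
        "output": "gs://my-bucket/output",
    },
)

# Returns the created Job resource.
job = client.create_job_from_template(request=request)
print(job.id, job.current_state)
```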
Request to create a Cloud Dataflow job.
Identifies the location of a custom source.
Data disk assignment for a given VM instance.
Metadata for a Datastore connector used by the job.
Describes any options that have an effect on the debugging of pipelines.
Request to delete a snapshot.
Response from deleting a snapshot.
Describes the data disk used by a workflow job.
Data provided with a pipeline or transform to provide descriptive info.
Parameters that should be passed when launching a dynamic template.
Describes the environment in which a Dataflow Job runs.
A message describing the state of a particular execution stage.
Description of the composing transforms, names/ids, and input/outputs of a
stage of execution. Some composing transforms and sources may have been
generated by the Dataflow service during execution planning.
Metadata for a File connector used by the job.
The environment values to be set at runtime for a Flex Template.
Request to get job execution details.
Request to get job metrics.
Request to get the state of a Cloud Dataflow job.
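For context, a hedged sketch of issuing the job-state and job-metrics requests above with the generated v1beta3 Python clients (class and field names are assumptions mirrored from the protos; IDs are placeholders):

```python
# Sketch (assumed API surface): fetch a job's state, then its metrics.
from google.cloud import dataflow_v1beta3

jobs_client = dataflow_v1beta3.JobsV1Beta3Client()
metrics_client = dataflow_v1beta3.MetricsV1Beta3Client()

job = jobs_client.get_job(
    request=dataflow_v1beta3.GetJobRequest(
        project_id="my-project",
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890",   # placeholder job ID
        view=dataflow_v1beta3.JobView.JOB_VIEW_SUMMARY,
    )
)

job_metrics = metrics_client.get_job_metrics(
    request=dataflow_v1beta3.GetJobMetricsRequest(
        project_id="my-project",
        location="us-central1",
        job_id=job.id,
    )
)
print(job.current_state, len(job_metrics.metrics))
```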
Request to get information about a snapshot.
Request to get information about a particular execution stage of a job.
Currently only tracked for Batch jobs.
A request to retrieve a Cloud Dataflow job template.
The response to a GetTemplate request.
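A possible way to retrieve template metadata (the GetTemplate request/response pair above), again assuming the generated Python client; the view enum and metadata fields are taken from the protos and the paths are placeholders:

```python
# Sketch (assumed API surface): fetch a template's metadata only.
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.TemplatesServiceClient()

response = client.get_template(
    request=dataflow_v1beta3.GetTemplateRequest(
        project_id="my-project",
        location="us-central1",
        gcs_path="gs://dataflow-templates/latest/Word_Count",
        view=dataflow_v1beta3.GetTemplateRequest.TemplateView.METADATA_ONLY,
    )
)

print(response.metadata.name)
for parameter in response.metadata.parameters:
    print(parameter.name, parameter.help_text)
```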
Used in the error_details field of a google.rpc.Status message, this
indicates problems with the template parameter.
Defines a job to be run by the Cloud Dataflow service.
Information about the execution of a job.
Additional information about how a Cloud Dataflow job will be executed that
isn’t contained in the submitted job.
Contains information about how a particular
[google.dataflow.v1beta3.Step][google.dataflow.v1beta3.Step] will be executed.
A particular message pertaining to a Dataflow job.
Metadata available primarily for filtering jobs. Will be included in the
ListJobs response and Job SUMMARY view.
JobMetrics contains a collection of metrics describing the detailed progress
of a Dataflow job. Metrics correspond to user-defined and system-defined
metrics in the job.
Data disk assignment information for a specific key-range of a sharded
computation.
Currently we only support UTF-8 character splits to simplify encoding into
JSON.
Location information for a specific key-range of a sharded computation.
Currently we only support UTF-8 character splits to simplify encoding into
JSON.
Parameters for launching a Flex Template.
A request to launch a Cloud Dataflow job from a FlexTemplate.
Response to the request to launch a job from Flex Template.
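A hedged sketch of the Flex Template launch flow described by the three entries above (parameter, request, response), assuming the generated Python client; the container spec path and parameters are placeholders:

```python
# Sketch (assumed API surface): launch a job from a Flex Template.
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.FlexTemplatesServiceClient()

request = dataflow_v1beta3.LaunchFlexTemplateRequest(
    project_id="my-project",
    location="us-central1",
    launch_parameter=dataflow_v1beta3.LaunchFlexTemplateParameter(
        job_name="flex-example",
        container_spec_gcs_path="gs://my-bucket/templates/spec.json",
        parameters={"input": "gs://my-bucket/input.txt"},
    ),
)

response = client.launch_flex_template(request=request)
print(response.job.id)   # the launched Job, per the response message
```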
Parameters to provide to the template being launched.
A request to launch a template.
Response to the request to launch a template.
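For the classic-template counterpart (parameters, request, and response above), a sketch that also sets a RuntimeEnvironment; as before, the client surface and values are assumptions:

```python
# Sketch (assumed API surface): launch a classic template with runtime settings.
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.TemplatesServiceClient()

request = dataflow_v1beta3.LaunchTemplateRequest(
    project_id="my-project",
    location="us-central1",
    gcs_path="gs://dataflow-templates/latest/Word_Count",
    launch_parameters=dataflow_v1beta3.LaunchTemplateParameters(
        job_name="wordcount-launch",
        parameters={
            "inputFile": "gs://my-bucket/input.txt",
            "output": "gs://my-bucket/output",
        },
        environment=dataflow_v1beta3.RuntimeEnvironment(
            temp_location="gs://my-bucket/tmp",
            max_workers=5,
        ),
    ),
)

response = client.launch_template(request=request)
print(response.job.id)
```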
Request to list job messages.
Up to max_results messages will be returned in the time range specified,
starting with the oldest messages first. If no time range is specified,
the results will start with the oldest message.
Response to a request to list job messages.
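To make the paging behavior above concrete, a sketch of listing a job's messages with the generated client (the pager and field names are assumptions; IDs are placeholders):

```python
# Sketch (assumed API surface): page through a job's messages, oldest first.
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.MessagesV1Beta3Client()

request = dataflow_v1beta3.ListJobMessagesRequest(
    project_id="my-project",
    location="us-central1",
    job_id="2024-01-01_00_00_00-1234567890",   # placeholder job ID
    minimum_importance=dataflow_v1beta3.JobMessageImportance.JOB_MESSAGE_WARNING,
)

# The generated pager follows page tokens transparently.
for message in client.list_job_messages(request=request):
    print(message.time, message.message_text)
```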
Request to list Cloud Dataflow jobs.
Response to a request to list Cloud Dataflow jobs in a project. This might
be a partial response, depending on the page size in the ListJobsRequest.
However, if the project does not have any jobs, an instance of
ListJobsResponse is not returned and the request's response
body is empty {}.
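Similarly, a sketch of listing jobs that lets the generated pager handle the partial responses mentioned above (client surface and filter enum are assumptions from the protos):

```python
# Sketch (assumed API surface): list the active jobs in a project and region.
from google.cloud import dataflow_v1beta3

client = dataflow_v1beta3.JobsV1Beta3Client()

request = dataflow_v1beta3.ListJobsRequest(
    project_id="my-project",
    location="us-central1",
    filter=dataflow_v1beta3.ListJobsRequest.Filter.ACTIVE,
)

# Each page of the partial responses is fetched as iteration proceeds.
for job in client.list_jobs(request=request):
    print(job.id, job.name, job.current_state)
```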
Request to list snapshots.
List of snapshots.
Identifies a metric by describing the source which generated the
metric.
Describes the state of a metric.
Describes mounted data disk.
The packages that must be installed in order for a worker to run the
steps of the Cloud Dataflow job that will be assigned to its worker
pool.
Metadata for a specific parameter.
A descriptive representation of submitted pipeline as well as the executed
form. This data is provided by the Dataflow service for ease of visualizing
the pipeline and interpreting Dataflow provided metrics.
Information about the progress of some component of job execution.
Metadata for a Pub/Sub connector used by the job.
Identifies a Pub/Sub location to use for transferring data into or
out of a streaming Dataflow job.
Represents a Pub/Sub snapshot.
The environment values to set at runtime.
RuntimeMetadata describing a runtime environment.
Defines an SDK harness container for executing Dataflow pipelines.
SDK Information.
The version of the SDK used to run the job.
Represents a snapshot of a job.
Request to create a snapshot of a job.
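A sketch of taking a snapshot of a running job (the request and snapshot messages above), assuming the generated client; the TTL and description are placeholders:

```python
# Sketch (assumed API surface): snapshot a streaming job.
from google.cloud import dataflow_v1beta3
from google.protobuf import duration_pb2

client = dataflow_v1beta3.JobsV1Beta3Client()

snapshot = client.snapshot_job(
    request=dataflow_v1beta3.SnapshotJobRequest(
        project_id="my-project",
        location="us-central1",
        job_id="2024-01-01_00_00_00-1234567890",            # placeholder job ID
        ttl=duration_pb2.Duration(seconds=7 * 24 * 3600),   # keep for one week
        description="example snapshot",
    )
)
print(snapshot.id, snapshot.state)
```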
Metadata for a Spanner connector used by the job.
Information about the workers and work items within a stage.
Information about a particular execution stage of a job.
State family configuration.
Defines a particular step within a Cloud Dataflow job.
Describes a stream of data, either as input to be processed or as
output of a streaming Dataflow job.
Streaming appliance snapshot configuration.
Describes full or partial data disk assignment information of the computation
ranges.
Identifies the location of a streaming side input.
Identifies the location of a streaming computation stage, for
stage-to-stage communication.
A rich message format, including a human readable string, a key for
identifying the message, and structured data associated with the message for
programmatic consumption.
Taskrunner configuration settings.
Metadata describing a template.
Global topology of the streaming Dataflow job, including all
computations and their sharded locations.
Description of the type, names/ids, and input/outputs for a transform.
Request to update a Cloud Dataflow job.
Information about an individual work item execution.
Information about a worker.
Describes one particular pool of Cloud Dataflow workers to be
instantiated by the Cloud Dataflow service in order to perform the
computations required by a job. Note that a workflow job may use
multiple pools, in order to match the various computational
requirements of the various stages of the job.
Provides data to pass through to the worker harness.