# Functions
New creates a new Service.
NewService creates a new Service.
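The usual entry point is NewService. A minimal sketch, assuming Application Default Credentials are available in the environment:

```go
package main

import (
	"context"
	"log"

	dataproc "google.golang.org/api/dataproc/v1"
)

func main() {
	ctx := context.Background()

	// With no explicit options, NewService uses Application Default Credentials.
	svc, err := dataproc.NewService(ctx)
	if err != nil {
		log.Fatalf("dataproc.NewService: %v", err)
	}

	// The generated sub-services hang off svc.Projects, e.g.
	// svc.Projects.Regions.Clusters and svc.Projects.Regions.Jobs.
	_ = svc
}
```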
# Constants
CloudPlatformScope: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account.
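If you configure credentials yourself, the scope above is exposed as the generated CloudPlatformScope constant and can be passed through option.WithScopes; a sketch:

```go
import (
	"context"

	dataproc "google.golang.org/api/dataproc/v1"
	"google.golang.org/api/option"
)

// newScopedService builds a client restricted to the cloud-platform scope above.
func newScopedService(ctx context.Context) (*dataproc.Service, error) {
	return dataproc.NewService(ctx, option.WithScopes(dataproc.CloudPlatformScope))
}
```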
# Structs
AcceleratorConfig: Specifies the type and number of accelerator cards attached to the instances of an instance group (see GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/)).
AutoscalingConfig: Autoscaling Policy config associated with the cluster.
AutoscalingPolicy: Describes an autoscaling policy for Dataproc cluster autoscaler.
BasicAutoscalingAlgorithm: Basic algorithm for autoscaling.
BasicYarnAutoscalingConfig: Basic autoscaling configurations for YARN.
Binding: Associates members with a role.
CancelJobRequest: A request to cancel a job.
Cluster: Describes the identifying information, config, and status of a cluster of Compute Engine instances.
ClusterConfig: The cluster config.
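To show how Cluster and ClusterConfig compose, here is a hedged sketch of a minimal create request (project, region, zone, and machine types are placeholders; imports as in the NewService sketch above):

```go
// createCluster sketches a minimal cluster creation request; the field
// choices are illustrative, not a recommended production configuration.
func createCluster(ctx context.Context, svc *dataproc.Service) error {
	cluster := &dataproc.Cluster{
		ProjectId:   "my-project",      // placeholder project
		ClusterName: "example-cluster", // placeholder name
		Config: &dataproc.ClusterConfig{
			GceClusterConfig: &dataproc.GceClusterConfig{
				ZoneUri: "us-central1-a", // placeholder zone
			},
			MasterConfig: &dataproc.InstanceGroupConfig{
				NumInstances:   1,
				MachineTypeUri: "n1-standard-4", // placeholder machine type
			},
			WorkerConfig: &dataproc.InstanceGroupConfig{
				NumInstances:   2,
				MachineTypeUri: "n1-standard-4",
			},
		},
	}

	// Create returns a long-running Operation (see Operation below); poll it
	// via svc.Projects.Regions.Operations to track completion.
	op, err := svc.Projects.Regions.Clusters.Create("my-project", "us-central1", cluster).
		Context(ctx).Do()
	if err != nil {
		return err
	}
	_ = op
	return nil
}
```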
ClusterMetrics: Contains cluster daemon metrics, such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only.
ClusterOperation: The cluster operation triggered by a workflow.
ClusterOperationMetadata: Metadata describing the operation.
ClusterOperationStatus: The status of the operation.
ClusterSelector: A selector that chooses target cluster for jobs based on metadata.
ClusterStatus: The status of a cluster and its instances.
DiagnoseClusterRequest: A request to collect cluster diagnostic information.
DiagnoseClusterResults: The location of diagnostic output.
DiskConfig: Specifies the config of disk options for a group of VM instances.
Empty: A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs.
EncryptionConfig: Encryption settings for the cluster.
EndpointConfig: Endpoint config for this cluster.
Expr: Represents a textual expression in the Common Expression Language (CEL) syntax.
GceClusterConfig: Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.
GetIamPolicyRequest: Request message for GetIamPolicy method.
GetPolicyOptions: Encapsulates settings provided to GetIamPolicy.
GkeClusterConfig: The GKE config for this cluster.
HadoopJob: A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
HiveJob: A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
InjectCredentialsRequest: A request to inject credentials into a cluster.
InstanceGroupAutoscalingPolicyConfig: Configuration for the size bounds of an instance group, including its proportional size to other groups.
InstanceGroupConfig: The config settings for Compute Engine resources in an instance group, such as a master or worker group.
InstanceReference: A reference to a Compute Engine instance.
InstantiateWorkflowTemplateRequest: A request to instantiate a workflow template.
Job: A Dataproc job resource.
JobMetadata: Job Operation metadata.
JobPlacement: Dataproc job config.
JobReference: Encapsulates the full scoping used to reference a job.
JobScheduling: Job scheduling options.
JobStatus: Dataproc job status.
KerberosConfig: Specifies Kerberos related configuration.
LifecycleConfig: Specifies the cluster auto-delete schedule configuration.
ListAutoscalingPoliciesResponse: A response to a request to list autoscaling policies in a project.
ListClustersResponse: The list of all clusters in a project.
ListJobsResponse: A list of jobs in a project.
ListOperationsResponse: The response message for Operations.ListOperations.
ListWorkflowTemplatesResponse: A response to a request to list workflow templates in a project.
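The List* responses above are paginated via NextPageToken; the generated list calls also expose a Pages helper that walks the pages for you. A sketch for jobs (placeholder project and region; imports as in the NewService sketch plus "fmt"):

```go
// listJobs prints job IDs in a project/region, following pagination.
func listJobs(ctx context.Context, svc *dataproc.Service) error {
	call := svc.Projects.Regions.Jobs.List("my-project", "us-central1")
	return call.Pages(ctx, func(page *dataproc.ListJobsResponse) error {
		for _, j := range page.Jobs {
			if j.Reference != nil {
				fmt.Println(j.Reference.JobId)
			}
		}
		return nil // return a non-nil error to stop iteration early
	})
}
```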
LoggingConfig: The runtime logging config of the job.
ManagedCluster: Cluster that is managed by the workflow.
ManagedGroupConfig: Specifies the resources used to actively manage an instance group.
MetastoreConfig: Specifies a Metastore configuration.
NamespacedGkeDeploymentTarget: A full, namespace-isolated deployment target for an existing GKE cluster.
NodeGroupAffinity: Node Group Affinity for clusters using sole-tenant node groups.
NodeInitializationAction: Specifies an executable to run on a fully configured node and a timeout period for executable completion.
Operation: This resource represents a long-running operation that is the result of a network API call.
OrderedJob: A job executed by the workflow.
ParameterValidation: Configuration for parameter validation.
PigJob: A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
Policy: An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. A Policy is a collection of bindings.
PrestoJob: A Dataproc job for running Presto (https://prestosql.io/) queries.
PySparkJob: A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
QueryList: A list of queries to run on a cluster.
RegexValidation: Validation based on regular expressions.
ReservationAffinity: Reservation Affinity for consuming Zonal reservation.
SecurityConfig: Security related configuration, including encryption, Kerberos, etc.
SetIamPolicyRequest: Request message for SetIamPolicy method.
ShieldedInstanceConfig: Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SoftwareConfig: Specifies the selection and config of software inside the cluster.
SparkJob: A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN.
SparkRJob: A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
SparkSqlJob: A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
StartClusterRequest: A request to start a cluster.
Status: The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs.
StopClusterRequest: A request to stop a cluster.
SubmitJobRequest: A request to submit a job.
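A hedged sketch of SubmitJobRequest in use, submitting a SparkJob to an existing cluster (cluster name, jar URI, and arguments are placeholders; imports as in the NewService sketch):

```go
// submitSparkJob submits a SparkJob to a named cluster and returns the Job.
func submitSparkJob(ctx context.Context, svc *dataproc.Service) (*dataproc.Job, error) {
	req := &dataproc.SubmitJobRequest{
		Job: &dataproc.Job{
			Placement: &dataproc.JobPlacement{
				ClusterName: "example-cluster", // placeholder: an existing cluster
			},
			SparkJob: &dataproc.SparkJob{
				MainClass: "org.apache.spark.examples.SparkPi", // placeholder class
				JarFileUris: []string{
					"file:///usr/lib/spark/examples/jars/spark-examples.jar", // placeholder jar
				},
				Args: []string{"1000"},
			},
		},
	}
	return svc.Projects.Regions.Jobs.Submit("my-project", "us-central1", req).
		Context(ctx).Do()
}
```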
TemplateParameter: A configurable parameter that replaces one or more fields in the template.
TestIamPermissionsRequest: Request message for TestIamPermissions method.
TestIamPermissionsResponse: Response message for TestIamPermissions method.
ValueValidation: Validation based on a list of allowed values.
WorkflowGraph: The workflow graph.
WorkflowMetadata: A Dataproc workflow template resource.
WorkflowNode: The workflow node.
WorkflowTemplate: A Dataproc workflow template resource.
WorkflowTemplatePlacement: Specifies workflow execution target. Either managed_cluster or cluster_selector is required; see the sketch at the end of this section.
YarnApplication: A YARN application created by a job.
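As noted under WorkflowTemplatePlacement above, a template targets either a managed_cluster or a cluster_selector, never both. A sketch of the two variants (labels and names are placeholders; imports as in the NewService sketch):

```go
// Placement on an existing cluster chosen by label at instantiation time.
var selectorPlacement = &dataproc.WorkflowTemplatePlacement{
	ClusterSelector: &dataproc.ClusterSelector{
		ClusterLabels: map[string]string{"env": "staging"}, // placeholder label
	},
}

// Placement on a short-lived cluster that the workflow creates, uses, and deletes.
var managedPlacement = &dataproc.WorkflowTemplatePlacement{
	ManagedCluster: &dataproc.ManagedCluster{
		ClusterName: "workflow-cluster", // placeholder name
		Config:      &dataproc.ClusterConfig{},
	},
}

// Exactly one of the two placements above is set on a WorkflowTemplate.
```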