Professional Data Engineer on Google Cloud Platform v1.0


Your analytics team wants to build a simple statistical model to determine which customers are most likely to work with your company again, based on a few different metrics. They want to run the model on Apache Spark, using data housed in Google Cloud Storage, and you have recommended using Google Cloud Dataproc to execute this job. Testing has shown that this workload can run in approximately 30 minutes on a 15-node cluster, outputting the results into Google BigQuery. The plan is to run this workload weekly. How should you optimize the cluster for cost?

  • A. Migrate the workload to Google Cloud Dataflow
  • B. Use pre-emptible virtual machines (VMs) for the cluster
  • C. Use a higher-memory node so that the job runs faster
  • D. Use SSDs on the worker nodes so that the job can run faster


Answer : B
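
Why B: the job is a short, weekly, fault-tolerant batch, so preemptible secondary workers cut compute cost without redesigning the workload. A minimal sketch, assuming the google-cloud-dataproc Python client; the project, region, and cluster names are placeholders:

    # Hypothetical sketch: a Dataproc cluster whose extra capacity runs on
    # secondary workers, which are preemptible by default and billed lower.
    from google.cloud import dataproc_v1

    def create_cost_optimized_cluster(project_id: str, region: str) -> None:
        client = dataproc_v1.ClusterControllerClient(
            client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
        )
        cluster = {
            "project_id": project_id,
            "cluster_name": "weekly-spark-job",
            "config": {
                "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
                # Keep a small core of standard workers for shuffle/HDFS stability...
                "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
                # ...and put the remaining nodes on preemptible secondary workers.
                "secondary_worker_config": {"num_instances": 12},
            },
        }
        operation = client.create_cluster(
            request={"project_id": project_id, "region": region, "cluster": cluster}
        )
        operation.result()  # block until the cluster is ready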

Your company receives both batch- and stream-based event data. You want to process the data using Google Cloud Dataflow over a predictable time period. However, you realize that in some instances data can arrive late or out of order. How should you design your Cloud Dataflow pipeline to handle data that is late or out of order?

  • A. Set a single global window to capture all the data.
  • B. Set sliding windows to capture all the lagged data.
  • C. Use watermarks and timestamps to capture the lagged data.
  • D. Ensure every datasource type (stream or batch) has a timestamp, and use the timestamps to define the logic for lagged data.


Answer : C
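
Why C: watermarks track event-time progress so the pipeline can tell on-time data from stragglers, and element timestamps drive window assignment. A minimal Apache Beam (Python SDK) sketch; the one-minute windows and ten-minute lateness allowance are illustrative values, not from the question:

    # Fixed event-time windows that still accept late records past the watermark.
    import apache_beam as beam
    from apache_beam import window
    from apache_beam.transforms import trigger

    def apply_windowing(events):
        return events | beam.WindowInto(
            window.FixedWindows(60),  # 60-second event-time windows
            # Fire at the watermark, then again shortly after late data arrives.
            trigger=trigger.AfterWatermark(late=trigger.AfterProcessingTime(30)),
            allowed_lateness=600,  # accept records up to 10 minutes late
            accumulation_mode=trigger.AccumulationMode.ACCUMULATING,
        )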

You have some data, which is shown in the graphic below. The two dimensions are X and Y, and the shade of each dot represents its class. You want to classify this data accurately using a linear algorithm. To do this, you need to add a synthetic feature. What should the value of that feature be?

  • A. X² + Y²
  • B. X²
  • C. Y²
  • D. cos(X)


Answer : A
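
Why A: this question is usually rendered with one class forming an inner cluster ringed by the other, so the decision boundary is radial; adding x² + y² turns that circular boundary into a simple threshold a linear model can learn. A sketch on synthetic stand-in data, since the graphic itself is not reproduced here:

    # Adding an x^2 + y^2 feature so logistic regression can separate
    # radially distributed classes (synthetic data stands in for the graphic).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(500, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)  # inner disc vs. outer ring

    radius_sq = (X ** 2).sum(axis=1, keepdims=True)      # the synthetic feature
    X_aug = np.hstack([X, radius_sq])

    model = LogisticRegression().fit(X_aug, y)
    print(model.score(X_aug, y))  # near-perfect once the boundary is linearized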

You are integrating one of your internal IT applications and Google BigQuery, so users can query BigQuery from the application's interface. You do not want individual users to authenticate to BigQuery and you do not want to give them access to the dataset. You need to securely access BigQuery from your IT application. What should you do?

  • A. Create groups for your users and give those groups access to the dataset
  • B. Integrate with a single sign-on (SSO) platform, and pass each user's credentials along with the query request
  • C. Create a service account and grant dataset access to that account. Use the service account's private key to access the dataset
  • D. Create a dummy user and grant dataset access to that user. Store the username and password for that user in a file on the file system, and use those credentials to access the BigQuery dataset


Answer : C
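
Why C: a service account gives the application its own identity, so end users never authenticate to BigQuery or receive dataset access themselves. A minimal sketch, assuming the google-cloud-bigquery client; the key path and table name are placeholders (where the platform allows, an attached service account is preferable to a downloaded key file):

    # The application authenticates as a service account and queries on
    # behalf of its users.
    from google.cloud import bigquery
    from google.oauth2 import service_account

    credentials = service_account.Credentials.from_service_account_file(
        "/path/to/service-account-key.json"
    )
    client = bigquery.Client(credentials=credentials, project=credentials.project_id)

    rows = client.query(
        "SELECT name FROM `my_project.my_dataset.my_table` LIMIT 10"
    ).result()
    for row in rows:
        print(row.name)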

You are building a data pipeline on Google Cloud. You need to prepare data using a casual method for a machine-learning process. You want to support a logistic regression model. You also need to monitor and adjust for null values, which must remain real-valued and cannot be removed. What should you do?

  • A. Use Cloud Dataprep to find null values in sample source data. Convert all nulls to 'none' using a Cloud Dataproc job.
  • B. Use Cloud Dataprep to find null values in sample source data. Convert all nulls to 0 using a Cloud Dataprep job.
  • C. Use Cloud Dataflow to find null values in sample source data. Convert all nulls to 'none' using a Cloud Dataprep job.
  • D. Use Cloud Dataflow to find null values in sample source data. Convert all nulls to 0 using a custom script.


Answer : B
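
Why B: Cloud Dataprep is the casual, UI-driven option, and converting nulls to 0 keeps the column real-valued, which a logistic regression input requires; converting to 'none' would turn it into a string. The equivalent transform sketched in pandas, purely for illustration:

    # Nulls must stay numeric for a logistic regression input, so impute 0.
    import pandas as pd

    df = pd.DataFrame({"recency_days": [3.0, None, 12.0], "orders": [1, 4, None]})
    print(df.isna().sum())  # inspect null counts, as Dataprep would show
    df = df.fillna(0)       # keep the columns real-valued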

You set up a streaming data insert into a Redis cluster via a Kafka cluster. Both clusters are running on Compute Engine instances. You need to encrypt data at rest with encryption keys that you can create, rotate, and destroy as needed. What should you do?

  • A. Create a dedicated service account, and use encryption at rest to reference your data stored in your Compute Engine cluster instances as part of your API service calls.
  • B. Create encryption keys in Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances.
  • C. Create encryption keys locally. Upload your encryption keys to Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances.
  • D. Create encryption keys in Cloud Key Management Service. Reference those keys in your API service calls when accessing the data in your Compute Engine cluster instances.


Answer : D
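
Why D: creating the keys in Cloud KMS gives you full create/rotate/destroy control, and you reference the key names in API calls (for example, customer-managed encryption key fields) instead of handling raw key material yourself, as option C would. A hedged sketch, assuming the google-cloud-kms client; the project, location, and key names are placeholders:

    # Create a key you control in Cloud KMS; rotation and destruction are
    # also managed through KMS.
    from google.cloud import kms

    client = kms.KeyManagementServiceClient()
    parent = client.key_ring_path("my-project", "us-central1", "my-key-ring")

    key = client.create_crypto_key(
        request={
            "parent": parent,
            "crypto_key_id": "kafka-redis-data-key",
            "crypto_key": {
                "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
            },
        }
    )
    print(key.name)  # reference this key name in API calls (e.g., CMEK fields)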

You are developing an application that uses a recommendation engine on Google Cloud. Your solution should display new videos to customers based on past views. Your solution needs to generate labels for the entities in videos that the customer has viewed. Your design must be able to provide very fast filtering suggestions based on data from other customer preferences on several TB of data. What should you do?

  • A. Build and train a complex classification model with Spark MLlib to generate labels and filter the results. Deploy the models using Cloud Dataproc. Call the model from your application.
  • B. Build and train a classification model with Spark MLlib to generate labels. Build and train a second classification model with Spark MLlib to filter results to match customer preferences. Deploy the models using Cloud Dataproc. Call the models from your application.
  • C. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud Bigtable, and filter the predicted labels to match the user's viewing history to generate preferences.
  • D. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud SQL, and join and filter the predicted labels to match the user's viewing history to generate preferences.


Answer : C
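
Why C: the Video Intelligence API generates labels without building a model, and Cloud Bigtable supports the very fast filtering at multi-TB scale that Cloud SQL would not. A minimal label-detection sketch, assuming the google-cloud-videointelligence client; the input URI is a placeholder:

    # Generate labels for a viewed video with the Video Intelligence API.
    from google.cloud import videointelligence

    client = videointelligence.VideoIntelligenceServiceClient()
    operation = client.annotate_video(
        request={
            "features": [videointelligence.Feature.LABEL_DETECTION],
            "input_uri": "gs://my-bucket/video.mp4",
        }
    )
    result = operation.result(timeout=300)
    for annotation in result.annotation_results[0].segment_label_annotations:
        print(annotation.entity.description)  # labels to store in Bigtable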

You are selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. You want to minimize service costs. You also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention. What should you do?

  • A. Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster via the command line.
  • B. Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the bottleneck and adjust cluster resources.
  • C. Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker instances.
  • D. Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-default Compute Engine machine types when needed.


Answer : C
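
Why C: Dataflow's autoscaling adjusts the worker count to the input volume with no manual resizing, and system lag in Stackdriver is the signal that scaling is keeping up. A sketch of launching such a pipeline, assuming the Beam Python SDK on the Dataflow runner; the project, topic, and bucket names are placeholders:

    # Launch a streaming Beam pipeline on Dataflow with default autoscaling.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",            # placeholder
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
        streaming=True,
        max_num_workers=20,              # cost cap; autoscaling picks 1..20
    )

    with beam.Pipeline(options=options) as p:
        (p
         | beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
         | beam.Map(lambda msg: msg.decode("utf-8"))  # JSON bytes -> str
         # ... parse/transform, then beam.io.WriteToBigQuery(...) ...
        )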

Your infrastructure includes a set of YouTube channels. You have been tasked with creating a process for sending the YouTube channel data to Google Cloud for analysis. You want to design a solution that allows your worldwide marketing teams to perform ANSI SQL and other types of analysis on up-to-date YouTube channel log data. How should you set up the log data transfer into Google Cloud?

  • A. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.
  • B. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Regional bucket as a final destination.
  • C. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.
  • D. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Regional storage bucket as a final destination.


Answer : C
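
Why C (over the originally marked B): the requirement is ANSI SQL over up-to-date YouTube channel reports, which is what the BigQuery Data Transfer Service is built for, and Multi-Regional storage suits worldwide readers; note the option wording ("offsite backup files") does not quite match the question. A heavily hedged sketch, assuming the google-cloud-bigquery-datatransfer client; the data_source_id and params below are assumptions to verify against the currently supported source list:

    # Schedule recurring YouTube channel report loads into BigQuery.
    from google.cloud import bigquery_datatransfer

    client = bigquery_datatransfer.DataTransferServiceClient()
    transfer_config = bigquery_datatransfer.TransferConfig(
        destination_dataset_id="youtube_analytics",   # placeholder dataset
        display_name="YouTube channel reports",
        data_source_id="youtube_channel",             # assumed source id
        params={},                                    # source-specific settings
    )
    created = client.create_transfer_config(
        parent=client.common_project_path("my-project"),
        transfer_config=transfer_config,
    )
    print(created.name)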

You are designing storage for very large text files for a data pipeline on Google Cloud. You want to support ANSI SQL queries. You also want to support compression and parallel load from the input locations using Google recommended practices. What should you do?

  • A. Transform text files to compressed Avro using Cloud Dataflow. Use BigQuery for storage and query.
  • B. Transform text files to compressed Avro using Cloud Dataflow. Use Cloud Storage and BigQuery permanent linked tables for query.
  • C. Compress text files to gzip using the Grid Computing Tools. Use BigQuery for storage and query.
  • D. Compress text files to gzip using the Grid Computing Tools. Use Cloud Storage, and then import into Cloud Bigtable for query.


Answer : A
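
Why A: Avro is self-describing, splittable, and compressible, so Cloud Dataflow can write it in parallel and BigQuery can load it in parallel; once loaded, queries are plain ANSI SQL. A minimal load sketch, assuming the google-cloud-bigquery client; the bucket and table IDs are placeholders:

    # Load compressed Avro from Cloud Storage into BigQuery in parallel.
    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.AVRO)

    load_job = client.load_table_from_uri(
        "gs://my-bucket/data/*.avro",
        "my_project.my_dataset.my_table",
        job_config=job_config,
    )
    load_job.result()  # wait; the table is then queryable with ANSI SQL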

You are developing an application on Google Cloud that will automatically generate subject labels for users' blog posts. You are under competitive pressure to add this feature quickly, and you have no additional developer resources. No one on your team has experience with machine learning. What should you do?

  • A. Call the Cloud Natural Language API from your application. Process the generated Entity Analysis as labels.
  • B. Call the Cloud Natural Language API from your application. Process the generated Sentiment Analysis as labels.
  • C. Build and train a text classification model using TensorFlow. Deploy the model using Cloud Machine Learning Engine. Call the model from your application and process the results as labels.
  • D. Build and train a text classification model using TensorFlow. Deploy the model using a Kubernetes Engine cluster. Call the model from your application and process the results as labels.


Answer : A
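
Why A: Entity Analysis returns the things a text is about, which map naturally to subject labels, while Sentiment Analysis only scores polarity; a pre-trained API fits a team with no ML experience and no extra developers. A minimal sketch, assuming the google-cloud-language client:

    # Extract entities from a blog post to use as subject labels.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="Touring the vineyards of Tuscany by bicycle...",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_entities(request={"document": document})
    labels = [entity.name for entity in response.entities]
    print(labels)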

You are designing storage for 20 TB of text files as part of deploying a data pipeline on Google Cloud. Your input data is in CSV format. You want to minimize the cost of querying aggregate values for multiple users who will query the data in Cloud Storage with multiple engines. Which storage service and schema design should you use?

  • A. Use Cloud Bigtable for storage. Install the HBase shell on a Compute Engine instance to query the Cloud Bigtable data.
  • B. Use Cloud Bigtable for storage. Link as permanent tables in BigQuery for query.
  • C. Use Cloud Storage for storage. Link as permanent tables in BigQuery for query.
  • D. Use Cloud Storage for storage. Link as temporary tables in BigQuery for query.


Answer : C
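
Why C: the files stay in Cloud Storage where other engines can still read them, and a permanent external (linked) table lets every user query through BigQuery without reloading the data; temporary tables would have to be redefined for each session. A sketch, assuming the google-cloud-bigquery client; identifiers are placeholders:

    # A permanent BigQuery table linked to CSV files in Cloud Storage.
    from google.cloud import bigquery

    client = bigquery.Client()
    external_config = bigquery.ExternalConfig("CSV")
    external_config.source_uris = ["gs://my-bucket/data/*.csv"]
    external_config.autodetect = True

    table = bigquery.Table("my_project.my_dataset.csv_linked")
    table.external_data_configuration = external_config
    client.create_table(table)  # other engines can still read the same files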

You are designing storage for two relational tables that are part of a 10-TB database on Google Cloud. You want to support transactions that scale horizontally. You also want to optimize data for range queries on non-key columns. What should you do?

  • A. Use Cloud SQL for storage. Add secondary indexes to support query patterns.
  • B. Use Cloud SQL for storage. Use Cloud Dataflow to transform data to support query patterns.
  • C. Use Cloud Spanner for storage. Add secondary indexes to support query patterns.
  • D. Use Cloud Spanner for storage. Use Cloud Dataflow to transform data to support query patterns.


Answer : C
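
Why C: Cloud Spanner is the horizontally scaling transactional option at this size, and secondary indexes serve range queries on non-key columns. A sketch of adding one, assuming the google-cloud-spanner client; all identifiers are placeholders:

    # Add a secondary index on a non-key column in Cloud Spanner.
    from google.cloud import spanner

    client = spanner.Client(project="my-project")
    database = client.instance("my-instance").database("my-database")

    operation = database.update_ddl(
        ["CREATE INDEX CustomersByLastName ON Customers(LastName)"]
    )
    operation.result()  # range scans on LastName can now use the index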

Your financial services company is moving to cloud technology and wants to store 50 TB of financial time-series data in the cloud. This data is updated frequently and new data will be streaming in all the time. Your company also wants to move their existing Apache Hadoop jobs to the cloud to get insights into this data. Which product should they use to store the data?

  • A. Cloud Bigtable
  • B. Google BigQuery
  • C. Google Cloud Storage
  • D. Google Cloud Datastore


Answer : A

Reference:
https://cloud.google.com/bigtable/docs/schema-design-time-series
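
A minimal write sketch following the row-key guidance in that reference, assuming the google-cloud-bigtable client; the instance, table, column family, and key layout are illustrative:

    # Write a time-series point to Bigtable with a metric#timestamp row key.
    import time
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project", admin=False)
    table = client.instance("ts-instance").table("market_data")

    row_key = f"EURUSD#{int(time.time())}".encode()  # metric id, then time
    row = table.direct_row(row_key)
    row.set_cell("quotes", "price", b"1.0842")
    table.mutate_rows([row])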

An organization maintains a Google BigQuery dataset that contains tables with user-level data. They want to expose aggregates of this data to other Google Cloud projects, while still controlling access to the user-level data. Additionally, they need to minimize their overall storage cost and ensure the analysis cost for other projects is assigned to those projects. What should they do?

  • A. Create and share an authorized view that provides the aggregate results.
  • B. Create and share a new dataset and view that provides the aggregate results.
  • C. Create and share a new dataset and table that contains the aggregate results.
  • D. Create dataViewer Identity and Access Management (IAM) roles on the dataset to enable sharing.


Answer : A

Reference:
https://cloud.google.com/bigquery/docs/share-access-views
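
A sketch of the authorization step from that reference, assuming the google-cloud-bigquery client; the dataset and view names are placeholders:

    # Authorize a view in a shared dataset to read the private source dataset.
    from google.cloud import bigquery

    client = bigquery.Client()
    view = client.get_table("my_project.shared_dataset.agg_view")

    source_dataset = client.get_dataset("my_project.private_dataset")
    entries = list(source_dataset.access_entries)
    entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
    source_dataset.access_entries = entries
    client.update_dataset(source_dataset, ["access_entries"])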
