Practice Test 1 | Google Cloud Certified Professional Cloud Architect | Dumps | Mock Test

For this question, refer to the TerramEarth case study.

TerramEarth’s 2 million vehicles are scattered around the world. Based on each vehicle’s location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100K miles. You want to run this job on all the data. What is the most cost-effective way to run this job?

A. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job.
B. Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.
C. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job.
D. Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.

Correct Answer – A

A (Correct answer) – Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job.

Since the raw data are saved based on each vehicle’s location, they are scattered across many different regions around the world, and eventually they need to be moved to a centralized location for final processing.

Preprocessing and compressing the raw data in each region reduces its size before transfer, which saves on between-region data egress costs.
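As a minimal sketch of that preprocessing step, using the google-cloud-storage Python client: all bucket names and the raw/ prefix are hypothetical, and in practice the loop would run on a cluster inside each source region so compression happens before any bytes leave that region.

```python
import gzip

from google.cloud import storage

# Hypothetical per-region source buckets and a regional staging bucket
# located near the Dataproc cluster; adjust to the real bucket layout.
REGION_BUCKETS = ["te-telemetry-us", "te-telemetry-eu", "te-telemetry-asia"]
STAGING_BUCKET = "te-telemetry-staging-us-central1"

client = storage.Client()
staging = client.bucket(STAGING_BUCKET)

for bucket_name in REGION_BUCKETS:
    for blob in client.list_blobs(bucket_name, prefix="raw/"):
        raw = blob.download_as_bytes()    # raw telemetry object
        compressed = gzip.compress(raw)   # shrink before crossing regions
        target = staging.blob(f"{bucket_name}/{blob.name}.gz")
        target.upload_from_string(compressed, content_type="application/gzip")
```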

Dataproc is a region-specific resource. Since you want to run this job on all the data, and you or your group are probably the only consumers of it, moving the data into a regional bucket in (or closest to) the Dataproc cluster’s region for the final analysis is the most cost-effective option.
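As a sketch of that final step, a minimal PySpark job like the following could be submitted to the Dataproc cluster; the bucket path and the odometer_miles/fault_code fields are assumptions about the telemetry schema, not part of the case study.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Dataproc's GCS connector lets Spark read gs:// paths directly, and
# Spark decompresses .gz inputs transparently.
spark = SparkSession.builder.appName("breakdown-report").getOrCreate()

telemetry = spark.read.json("gs://te-telemetry-staging-us-central1/*/raw/*.gz")

# Hypothetical schema: count fault codes for vehicles past 100K miles.
report = (
    telemetry
    .where(F.col("odometer_miles") > 100_000)
    .groupBy("fault_code")
    .count()
    .orderBy(F.desc("count"))
)

report.write.csv("gs://te-telemetry-staging-us-central1/reports/breakdowns",
                 header=True)
```

On Dataproc, a script like this would typically be submitted with gcloud dataproc jobs submit pyspark, targeting the same region as the staging bucket.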

Use a regional location to help optimize latency, availability, and network bandwidth for data consumers grouped in the same region.

Use a multi-regional location when you want to serve content to data consumers that are outside of the Google network and distributed across large geographic areas.

- Store frequently accessed data, or data that needs to be geo-redundant, as Multi-Regional Storage.
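To make the distinction concrete, here is a minimal sketch of creating the two bucket types with the google-cloud-storage client; the bucket names and locations are illustrative.

```python
from google.cloud import storage

client = storage.Client()

# Regional bucket: co-located with the Dataproc cluster for the analysis job.
client.create_bucket("te-telemetry-staging-us-central1", location="us-central1")

# Multi-regional bucket: geo-redundant serving across a continent; it costs
# more to store, which is why it is not the cost-effective choice here.
client.create_bucket("te-telemetry-serving-us", location="US")
```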

B – Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.

Since the raw data are saved based on the vehicles’ locations all over the world, moving them to a centralized region without first preprocessing and compressing them would incur additional between-region data egress costs.
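As a rough back-of-the-envelope illustration of why compression matters here (the data volume, egress rate, and compression ratio below are all assumptions, not figures from the case study):

```python
# Illustrative only: assumed volume, egress price, and compression ratio.
raw_tb_per_region = 300      # assumed TB of raw telemetry in a remote region
egress_per_gb = 0.08         # assumed USD/GB for cross-continent egress
compression_ratio = 0.1      # assumed 10:1 compression on raw telemetry

raw_cost = raw_tb_per_region * 1024 * egress_per_gb
compressed_cost = raw_cost * compression_ratio

print(f"uncompressed transfer: ${raw_cost:,.0f}")        # $24,576
print(f"compressed transfer:   ${compressed_cost:,.0f}") # $2,458
```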

C – Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job.

Dataproc is a region-specific resource, and since the job runs on all the data with consumption happening in one centralized location, paying for a multi-region bucket to feed the Dataproc cluster is not the most cost-effective option; a regional bucket co-located with the cluster is cheaper and serves the job just as well.

D – Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.

GCS buckets are regional or multi-regional resources, not zonal, so the data cannot be moved “into 1 zone” in the first place.
