
Practice Test 4 | Google Cloud Certified Professional Data Engineer | Dumps | Mock Test


Your company is migrating its 40-node Apache Hadoop cluster to the cloud. The company has several Spark and Pig jobs it has already created and wants to re-run them as-is on Google Cloud. It also wants to minimize cluster management as much as possible, and the data needs to persist beyond the life of the cluster. What should you do?

A. Create a Cloud Dataflow job to process the data.
B. Create a Hadoop cluster on Google Compute Engine and use local SSDs to persist the data.
C. Create a Cloud Dataproc cluster and use persistent disks for HDFS.
D. Create a Cloud Dataproc cluster and use the Google Cloud Storage connector.

The correct answer is D.

Option A is incorrect. Cloud Dataflow runs Apache Beam pipelines; the existing Spark and Pig jobs would have to be rewritten rather than re-run as-is.

Option B is incorrect. Creating a Hadoop cluster on Compute Engine would increase the infrastructure management burden. Also, local SSDs are ephemeral: their contents are lost when the instance is stopped or deleted, so they cannot persist data beyond the life of the cluster.

Option C is incorrect. Because the persistent disks backing HDFS on a Dataproc cluster are deleted along with the cluster, the data would be lost when the cluster is terminated.

Option D is correct. The requirement is to reuse the existing Spark and Pig jobs while minimizing infrastructure management and storing data in durable external storage. Dataproc with the Cloud Storage connector is an ideal fit: jobs can reference `gs://` paths in place of `hdfs://` paths, and the data in Cloud Storage outlives the cluster.
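The migration described in option D can be sketched with the `gcloud` CLI. This is a minimal sketch: the cluster name, region, bucket, jar path, and main class below are all hypothetical placeholders, and a real migration would also tune machine types and autoscaling.

```shell
# Sketch with hypothetical names (migrated-hadoop, example-bucket, etc.).
# The Cloud Storage connector is preinstalled on Dataproc, so existing
# Spark and Pig jobs can read and write gs:// paths directly.

# Create a managed Dataproc cluster sized like the on-premises one.
gcloud dataproc clusters create migrated-hadoop \
    --region=us-central1 \
    --num-workers=40

# Submit an existing Spark job, pointing it at Cloud Storage instead of HDFS.
gcloud dataproc jobs submit spark \
    --cluster=migrated-hadoop \
    --region=us-central1 \
    --class=com.example.WordCount \
    --jars=gs://example-bucket/jobs/wordcount.jar \
    -- gs://example-bucket/input/ gs://example-bucket/output/

# Pig scripts can be submitted the same way with `jobs submit pig`.

# Because the data lives in gs://example-bucket, it survives cluster deletion:
gcloud dataproc clusters delete migrated-hadoop --region=us-central1
```

Deleting the cluster when jobs finish is what keeps management and cost minimal; the durable copy of the data stays in Cloud Storage.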

