
Practice Test 2 | Google Cloud Certified Professional Data Engineer | Dumps | Mock Test


You are building a streaming data pipeline for a VOD (video-on-demand) service company. The pipeline receives event data from the company's player app: details of what users are watching, video state (play, pause, loading), and other metrics that can be derived from the device used, such as OS, brand, and screen resolution.

The collected event data should be analyzed with a focus on the most recent data, for quality checks and further action in case of streaming issues. How can you ingest the stream data?

A. Use Cloud Pub/Sub to ingest the events and attach a unique ID to every event in the publisher.
B. Use Cloud Pub/Sub to ingest the events and attach a timestamp to every event in the publisher.
C. Use Cloud Pub/Sub to ingest the events and store them in Bigtable without any enrichment. Pub/Sub automatically adds a timestamp to messages before delivering them to subscribers.
D. Launch a Compute Engine instance and install Apache Kafka to ingest the event stream.

Answer: C.

Here the requirement is "The collected event data should be analyzed with a focus on the most recent data, for quality checks and further action in case of streaming issues."
To accomplish this, we need to store the ingested data in a data store such as Bigtable or BigQuery and analyze it based on when each event was received.
The Pub/Sub service adds the following fields to every message (see the sketch after the list):

  • A message ID unique to the topic
  • A timestamp for when the Pub/Sub service receives the message
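
A minimal sketch of this behavior, assuming the google-cloud-pubsub Python client and hypothetical project, topic, and subscription names ("my-vod-project", "player-events", "player-events-sub"): the publisher sends only the raw event payload, and the service fills in message_id and publish_time.

    from concurrent.futures import TimeoutError

    from google.cloud import pubsub_v1

    project_id = "my-vod-project"            # hypothetical project
    topic_id = "player-events"               # hypothetical topic
    subscription_id = "player-events-sub"    # hypothetical subscription

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)

    # Publish a raw player event; we attach no ID and no timestamp ourselves.
    future = publisher.publish(
        topic_path, data=b'{"user": "u123", "state": "pause", "os": "Android"}'
    )
    print("Service-assigned message ID:", future.result())

    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path(project_id, subscription_id)

    def callback(message: pubsub_v1.subscriber.message.Message) -> None:
        # Both fields below were added by the Pub/Sub service, not the publisher.
        print("message_id:  ", message.message_id)
        print("publish_time:", message.publish_time)
        message.ack()

    streaming_pull = subscriber.subscribe(sub_path, callback=callback)
    try:
        streaming_pull.result(timeout=10)    # listen briefly for the demo
    except TimeoutError:
        streaming_pull.cancel()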

Option A: Use Cloud Pub/Sub to ingest the events and attach a unique ID to every event in the publisher.
    Pub/Sub already assigns every message a unique messageId, so a publisher-attached ID adds little, and the option says nothing about where the data is stored for analysis. Hence this is not the correct solution.
Option B: Use Cloud Pub/Sub to ingest the events and attach a timestamp to every event in the publisher.
    Pub/Sub already adds a timestamp to every message, so attaching one in the publisher is redundant, and again the option does not mention where the data would be stored for analysis. Hence it does not serve our objective. (The sketch below shows what options A and B propose.)
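
Purely for illustration, a hypothetical publisher doing what options A and B describe: attaching its own ID and timestamp as message attributes (the attribute names event_id and client_ts are made up). This works, but it duplicates fields the service already provides.

    import time
    import uuid

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-vod-project", "player-events")

    # Attribute values must be strings.
    publisher.publish(
        topic_path,
        data=b'{"state": "play"}',
        event_id=str(uuid.uuid4()),    # option A: publisher-attached unique ID
        client_ts=str(time.time()),    # option B: publisher-attached timestamp
    )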
Option C: Use Cloud Pub/Sub to ingest the events and store them in Bigtable without any enrichment. Pub/Sub automatically adds a timestamp to messages before delivering them to subscribers.
    Storing the ingested data without enrichment keeps it usable for multiple analysis requirements, and the unique message ID plus the service-assigned timestamp let us analyze the stream down to each individual message, including the most recent data.
    Hence this is the correct answer.
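
A minimal sketch of this path, assuming the google-cloud-bigtable Python client and a hypothetical Bigtable instance "vod-instance" with a table "events" and column family "raw": the subscriber callback stores each message unenriched, keyed and timestamped by the service-assigned fields.

    from google.cloud import bigtable
    from google.cloud import pubsub_v1

    client = bigtable.Client(project="my-vod-project")
    table = client.instance("vod-instance").table("events")

    def store_event(message: pubsub_v1.subscriber.message.Message) -> None:
        # Bigtable cell timestamps use millisecond granularity, so truncate.
        ts = message.publish_time.replace(
            microsecond=message.publish_time.microsecond // 1000 * 1000
        )
        # Key by publish time plus the unique message ID so the most recent
        # events are easy to scan and duplicates collapse onto one row.
        row_key = f"{ts.isoformat()}#{message.message_id}".encode()
        row = table.direct_row(row_key)
        # Store the raw payload with no enrichment.
        row.set_cell("raw", "payload", message.data, timestamp=ts)
        row.commit()
        message.ack()

Note that a purely time-prefixed row key can hotspot a single Bigtable node under heavy write load; a production key design would typically lead with a field such as device or user ID. The sketch keeps the key simple to highlight the service-assigned fields.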
Option D: Launch a Compute Engine instance and install Apache Kafka to ingest the event stream.
    Apache Kafka is used for stream processing, website activity tracking, metrics collection and monitoring, log aggregation, real-time analytics, complex event processing (CEP), ingesting data into Spark or Hadoop, CQRS, message replay, error recovery, and as a guaranteed distributed commit log for in-memory computing.
    But it is not a data store for later analysis, and running Kafka on Compute Engine means managing IaaS ourselves, which is a poor choice when managed services such as Pub/Sub and Bigtable are available.
    Hence this is also not a good solution.

Ref. URL: https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage

