Professional Cloud Architect Exam Questions

Professional Cloud Architect Exam - Question 160


The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.

Which process should you implement?

Correct Answer: C

To minimize data loss while keeping storage efficient, the process should involve compressing individual files and naming them with a predictable pattern such as serverName-EventSequence. Saving files to one bucket centralizes them and simplifies management, while setting custom metadata headers on each object after saving aids retrieval and management. This approach balances performance, scalability, and data integrity.
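For illustration, a minimal sketch of that flow, assuming the google-cloud-storage Python client (the bucket name and helper function are hypothetical, not from the question):

# Hypothetical sketch: compress one event record, upload it, then
# set custom metadata on the object afterwards.
import gzip
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("event-archive")  # hypothetical bucket name

def save_event(server_name: str, sequence: int, record: bytes) -> None:
    # Predictable object name: serverName-EventSequence, per the explanation above.
    blob = bucket.blob(f"{server_name}-{sequence:012d}.gz")
    # Compress the individual event file before saving.
    blob.upload_from_string(gzip.compress(record), content_type="application/gzip")
    # Set custom metadata headers for the object after saving (a separate PATCH request).
    blob.metadata = {"server": server_name, "sequence": str(sequence)}
    blob.patch()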

Discussion

17 comments
Sign in to comment
rishab86 (Option: D)
Jun 4, 2021

answer is definitely D. https://cloud.google.com/storage/docs/request-rate#naming-convention: "A longer randomized prefix provides more effective auto-scaling when ramping to very high read and write rates. For example, a 1-character prefix using a random hex value provides effective auto-scaling from the initial 5000/1000 reads/writes per second up to roughly 80000/16000 reads/writes per second, because the prefix has 16 potential values. If your use case does not need higher rates than this, a 1-character randomized prefix is just as effective at ramping up request rates as a 2-character or longer randomized prefix."
Example:
my-bucket/2fa764-2016-05-10-12-00-00/file1
my-bucket/5ca42c-2016-05-10-12-00-00/file2
my-bucket/6e9b84-2016-05-10-12-00-01/file3
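A rough sketch of that naming scheme, again assuming the google-cloud-storage Python client (bucket and object names are illustrative, echoing the docs example):

# Hypothetical sketch of the randomized-prefix naming the docs describe:
# a random hex prefix spreads writes across key ranges so Cloud Storage
# can auto-scale to higher request rates.
import secrets
import time
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # illustrative, matching the docs example

def upload_event(data: bytes) -> str:
    # 1-character random hex prefix: 16 possible values, which the quoted
    # docs say ramps to roughly 80000/16000 reads/writes per second.
    prefix = secrets.choice("0123456789abcdef")
    name = f"{prefix}/{time.strftime('%Y-%m-%d-%H-%M-%S')}/{secrets.token_hex(4)}"
    bucket.blob(name).upload_from_string(data)
    return name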

kopper2019
Jun 30, 2021

- New Q, 06/2021, Helicopter Racing League, Testlet 1, QUESTION 6
For this question, refer to the Helicopter Racing League (HRL) case study. A recent finance audit of cloud infrastructure noted an exceptionally high number of Compute Engine instances are allocated to do video encoding and transcoding. You suspect that these Virtual Machines are zombie machines that were not deleted after their workloads completed. You need to quickly get a list of which VM instances are idle. What should you do?
A. Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for analysis.
B. Use the gcloud compute instances list to list the virtual machine instances that have the idle: true label set.
C. Use the gcloud recommender command to list the idle virtual machine instances.
D. From the Google Console, identify which Compute Engine instances in the managed instance groups are no longer responding to health check probes.

juccjucc
Jul 1, 2021

is it C?

cloudstd
Jul 2, 2021

This is not 100% accurate. You should investigate if you suspect a recommendation is incorrect: https://cloud.google.com/compute/docs/instances/viewing-and-applying-idle-vm-recommendations

Papafel
Jul 15, 2021

The correct answer is A

matmuh
Nov 17, 2021

Absolutely C

squishy_fishy
Dec 14, 2023

The correct answer is C based on the URL you shared:
gcloud recommender recommendations list \
  --project=PROJECT_ID \
  --location=ZONE \
  --recommender=google.compute.instance.IdleResourceRecommender \
  --format=yaml

cloudstd
Jul 1, 2021

answer: C

KS1911
Jul 16, 2021

I have my exam scheduled in 3 days. Will there be more questions coming on ExamTopics?

kravenn
Aug 6, 2021

answer C

joe2211 (Option: D)
Nov 27, 2021

vote D

amxexam
Sep 12, 2021

Request admin to intervene and delete the hijacking of the question by kopper2019

Examster1
Sep 16, 2021

Use the material for study dude! Hello? Anyone home?

Arad
Nov 26, 2021

it looks like this website does not have any admin

kopper2019
Jun 30, 2021

- New Q, 06/2021, Helicopter Racing League, Testlet 1, QUESTION 5
For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke, what should you do?
A. Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.
B. Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.
C. Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.
D. Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.

juccjucc
Jul 1, 2021

Is it C? Are all these questions from the new exam? Why are they here in the comments and not as questions in the list?

kopper2019
Jul 3, 2021

Because the exam list was not updated, I added the Qs here. They have since been added as normal questions, so now we have 218 Qs.

Roncy
Oct 1, 2021

Hey Kopper, when would you provide the new set of questions ?

cloudstd
Jul 1, 2021

answer: C

Papafel
Jul 15, 2021

Yes answer is C

kravenn
Aug 6, 2021

answer: C

kopper2019
Jun 30, 2021

- New Q, 06/2021, Helicopter Racing League, Testlet 1, QUESTION 4
For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google’s AI Platform so HRL can understand and interpret the predictions. What should you do?
A. Use Explainable AI.
B. Use Vision AI.
C. Use Google Cloud’s operations suite.
D. Use Jupyter Notebooks.

juccjucc
Jul 1, 2021

is it A?

Papafel
Jul 15, 2021

Yes answer is A

cloudstd
Jul 1, 2021

answer: A

kravenn
Aug 6, 2021

answer A

Sephethus
Jun 20, 2024

what does this have to do with the cloud storage question?

ptsironis (Option: B)
May 28, 2023

Why not option B??

kopper2019
Jun 30, 2021

- New Q, 06/2021, Helicopter Racing League, Testlet 1, QUESTION 3
For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?
A. Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.
B. Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.
C. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.
D. Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.

esc
Jul 4, 2021

answer: A

Papafel
Jul 15, 2021

Answer is A

jask
Sep 22, 2021

In option A, what is the use of the Cloud Storage bucket? In my opinion the answer is C.

vchrist
Nov 30, 2021

Why A? Does Cloud Storage make sense here?

Amrit123
Oct 21, 2021

C is the right answer. A scheduler would run at a fixed time even if the release has not been done. Since the question says "application as soon as it is released", the exact time is not certain, so the answer is C. Check out the last 30 questions; there is a separate discussion that gives a better idea.

cloudmon
Apr 6, 2022

I would go with C https://cloud.google.com/source-repositories/docs/code-change-notification

BiddlyBdoyng
Jun 12, 2023

It's probably C, due to Pub/Sub notifications on Cloud Deploy rather than Source Repositories: https://cloud.google.com/deploy/docs/subscribe-deploy-notifications
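If C is right, the wiring would look roughly like the sketch below: a Pub/Sub-triggered Cloud Function in Python. The function name and the way Airwolf is invoked are assumptions, since the question doesn't specify them; deployment would be something like gcloud functions deploy run_airwolf --runtime python39 --trigger-topic <topic>.

# Hypothetical sketch of a Pub/Sub-triggered Cloud Function: the
# deployment job publishes a message when the release lands, and this
# function responds by kicking off the Airwolf penetration test.
import base64

def run_airwolf(event, context):
    """Background Cloud Function, triggered by a Pub/Sub message."""
    payload = base64.b64decode(event.get("data", "")).decode("utf-8")
    print(f"Release notification received: {payload}")
    # Hypothetical: invoke the in-house Airwolf test here, e.g. by
    # calling its HTTP endpoint; the question doesn't describe this part.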

vincy2202 (Option: D)
Dec 13, 2021

D is the correct answer

Pime13 (Option: D)
Jan 23, 2022

D: https://cloud.google.com/storage/docs/request-rate#naming-convention

Mahmoud_E (Option: D)
Oct 20, 2022

D is the correct answer https://cloud.google.com/storage/docs/request-rate#naming-convention

squishy_fishy (Option: A)
Dec 14, 2023

The question is how to reduce data loss; the answer should be something like separation of duties or data loss prevention, but answer D is about reducing latency when retrieving data. I'm baffled by this question.

kopper2019
Jun 30, 2021

- New Q, 06/2021, Helicopter Racing League, Testlet 1
Company overview: Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing. Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot in the world championship. HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race.
Solution concept: HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions. Additionally, as new fans engage with the sport, particularly in emerging regions, they want to move the serving of their content, both real-time and recorded, closer to their users.

kopper2019
Jun 30, 2021

Existing technical environment: HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider. Video recording and editing is performed at the race tracks, and the content is encoded and transcoded, where needed, in the cloud. Enterprise-grade connectivity and local compute are provided by truck-mounted mobile data centers. Their race prediction services are hosted exclusively on their existing public cloud provider. Their existing technical environment is as follows:
- Existing content is stored in an object storage service on their existing public cloud provider.
- Video encoding and transcoding is performed on VMs created for each job.
- Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.

kopper2019
Jun 30, 2021

Business requirements: HRL’s owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets. Their requirements are:
- Support ability to expose the predictive models to partners.
- Increase predictive capabilities during and before races: race results, mechanical failures, crowd sentiment.
- Increase telemetry and create additional insights.
- Measure fan engagement with new predictions.
- Enhance global availability and quality of the broadcasts.
- Increase the number of concurrent viewers.
- Minimize operational complexity.
- Ensure compliance with regulations.
- Create a merchandising revenue stream.
Technical requirements:
- Maintain or increase prediction throughput and accuracy.
- Reduce viewer latency.
- Increase transcoding performance.
- Create real-time analytics of viewer consumption patterns and engagement.
- Create a data mart to enable processing of large volumes of race data.

kopper2019
Jun 30, 2021

Executive statement: Our CEO, S. Hawke, wants to bring high-adrenaline racing to fans all around the world. We listen to our fans, and they want enhanced video streams that include predictions of events within the race (e.g., overtaking). Our current platform allows us to predict race outcomes but lacks the facility to support real-time predictions during races and the capacity to process season-long results.

kopper2019
Jun 30, 2021

QUESTION 1
For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:
• It must provide low latency at minimal cost.
• It must be able to identify duplicate credit cards and must not store plaintext card numbers.
• It should support annual key rotation.
Which storage approach should you adopt for your tokenization service?
A. Store the card data in Secret Manager after running a query to identify duplicates.
B. Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.
C. Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.
D. Use column-level encryption to store the data in Cloud SQL.

SPNBLUE
Aug 3, 2021

Why D?

kopper2019
Jun 30, 2021

- New Q, 06/2021, Helicopter Racing League, Testlet 1, QUESTION 2
For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?

kopper2019
Jun 30, 2021

A. gcloud compute security-policies rules update 1000 \
     --security-policy from-fastly \
     --src-ip-ranges * \
     --action "allow"
B. gcloud compute firewall rules update sourceiplist-fastly \
     --priority 100 \
     --allow tcp:443
C. gcloud compute firewall rules update hlr-policy \
     --priority 100 \
     --target-tags=sourceiplist-fastly \
     --allow tcp:443
D. gcloud compute security-policies rules update 1000 \
     --security-policy hlr-policy \
     --expression "evaluatePreconfiguredExpr('sourceiplist-fastly')" \
     --action "allow"

cloudstd
Jul 1, 2021

answer: D

Papafel
Jul 15, 2021

Answer is A

matmuh
Nov 18, 2021

A is incorrect: specifying * as --src-ip-ranges matches all IPs, not just the Fastly ranges. https://cloud.google.com/sdk/gcloud/reference/compute/security-policies/rules/update

kravenn
Aug 6, 2021

answer D

xavi1
Aug 16, 2021

Both A and D have correct syntax, but --src-ip-ranges should not be "*" (that would allow all IPs); the correct answer is D.

cloudmon
Apr 6, 2022

I agree

nunopires2001 (Option: D)
Jan 26, 2023

I was thinking the correct answer was A, because we should have some kind of bucket rotation to avoid hitting the max size of a bucket. However, it seems there is no size limit for a GCP Cloud Storage bucket, so I will have to agree with the community and stick to answer D.

marcohol (Option: D)
Oct 7, 2023

I agree with D, but wouldn't using a random prefix make retrieving the files more difficult?

Sephethus
Jun 20, 2024

This question is messed up: the formatting, the discussion, everything. I have no idea what to choose here. ChatGPT thinks the answer is C, but most think it is D, and there's not much difference between the two answers.