Professional Cloud Architect Exam Questions

Professional Cloud Architect Exam - Question 128


You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?

A. Cloud Run and BigQuery
B. Cloud Run and Cloud Bigtable
C. A Compute Engine autoscaling managed instance group and BigQuery
D. A Compute Engine autoscaling managed instance group and Cloud Bigtable

Correct Answer: B

Cloud Run can scale to zero during periods of inactivity, which helps in keeping costs low. Additionally, Cloud Bigtable is well-suited for managing high-throughput and real-time queries, making it suitable for handling 500,000 requests per second. Therefore, using Cloud Run for the web service and Cloud Bigtable for the database ensures both cost efficiency and performance under heavy load.
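
The "exact matches of a known set of attributes" requirement maps naturally onto Bigtable's row-key model. A minimal sketch of such a point lookup with the google-cloud-bigtable Python client, assuming hypothetical project/instance/table names and a made-up "device#timestamp" row-key scheme:

    from google.cloud import bigtable

    # Hypothetical names; the known query attributes are encoded in the row key.
    client = bigtable.Client(project="my-project")
    table = client.instance("my-instance").table("requests")

    # Exact-match lookup: one row key, one single-row read.
    row = table.read_row(b"device42#2021-07-01T12:00:00")
    if row is not None:
        payload = row.cells["data"][b"payload"][0].value  # family, qualifier, newest cell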

Discussion

17 comments
Enzian (Option: B)
Jul 1, 2021

Any correct answer must involve Cloud Bigtable over BigQuery, since Bigtable is optimized for heavy write loads. That leaves B and D. I would suggest B because it is lower cost ("The business wants to keep costs low").

pakilodi
Dec 15, 2021

Not only that: occasionally there will be no requests, so Cloud Run will scale to zero.

Petya27
May 30, 2023

Plus, we are talking about a predefined set of queries. For any predefined list of (simple) queries, we use Bigtable, and for any (complex) queries that we do not know ahead of time, we use BigQuery.
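
To make Enzian's write-load point concrete: the Python Bigtable client can buffer mutations and send them in bulk, which is how an ingest path at this request rate would typically write. A sketch under the same hypothetical names as above:

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("my-instance").table("requests")

    # Buffer writes and flush in batches instead of one RPC per request.
    batcher = table.mutations_batcher(flush_count=1000)
    for i in range(10_000):
        row = table.direct_row(f"device{i % 100}#event{i}".encode())
        row.set_cell("data", b"payload", b"...")
        batcher.mutate(row)
    batcher.flush()  # send any mutations still buffered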

zanfo
Mar 14, 2022

The correct answer is B.

AmitRBS
May 27, 2022

B. Agree. Additionally, the data needs to be stored now, so use Bigtable; the question is not about analysis or data analytics.

kshlgpt
Jan 2, 2024

But Cloud Run can't support 50,000 requests per second. Even Cloud Run 2nd gen supports only 1,000 requests per second. B is eliminated.

pancakes22
Jan 25, 2024

That's incorrect. https://cloud.google.com/run/quotas

MamthaSJ (Option: B)
Jul 7, 2021

B is the correct answer.

convers39 (Option: B)
Jan 11, 2024

50,000 rps: at first I thought Cloud Run could not handle this request rate, and I chose D. After a bit of research in the docs I changed my mind to B. On per-instance concurrency, the documentation clearly says:

> By default each Cloud Run instance can receive up to 80 requests at the same time; you can increase this to a maximum of 1000

https://cloud.google.com/run/docs/about-concurrency

The maximum number of autoscaled instances is 100 by default and can be raised depending on the regional quota. Even with the default max instances, that is 100 * 1000 = 100,000 concurrent requests, which should be able to meet the 50,000 rps requirement. https://cloud.google.com/run/docs/about-instance-autoscaling

afsarkhan
Jul 13, 2024

The question says it's 500k rps, not 50k.
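
convers39's capacity math, together with afsarkhan's 500k correction, can be checked with Little's law (in-flight requests = arrival rate x latency). A back-of-the-envelope sketch, assuming the 1,000-per-instance concurrency from the docs and a hypothetical 100 ms average request latency:

    import math

    def instances_needed(rps: float, concurrency: int = 1000,
                         latency_s: float = 0.1) -> int:
        """Little's law: concurrent in-flight requests = rps * latency."""
        return math.ceil(rps * latency_s / concurrency)

    print(instances_needed(50_000))   # 5 instances under these assumptions
    print(instances_needed(500_000))  # 50 instances under these assumptions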

pancakes22 (Option: B)
Jan 25, 2024

https://cloud.google.com/run/quotas

There is no direct limit for:
- the size of container images you can deploy
- the number of concurrent requests served by a Cloud Run service

tamer_m_Saleh (Option: D)
Dec 23, 2023

At first I thought it's B, but then I thought about the number of requests over 1 minute: that works out to 30 million requests per minute, and based on Cloud Run pricing the requests alone would cost 24 USD. So Cloud Run would cost the company 24 USD/min, which might be a very costly option. But Cloud Run pricing has two modes:
- CPU allocated only while processing requests: there is a cost for CPU and for requests
- CPU always allocated: there is only a cost for the CPU, and zero price for the number of requests
I think we need someone who has experienced the billing of a Cloud Run service under a heavy load like this :)
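
tamer_m_Saleh's per-minute figure is easy to recompute for any assumed price. A sketch of the request-dimension cost only (the per-million price below is an assumption for illustration; CPU and memory are billed separately):

    def request_cost_per_minute(rps: float, usd_per_million: float) -> float:
        """Cost of the request dimension alone, per minute of traffic."""
        return rps * 60 / 1_000_000 * usd_per_million

    # 500,000 rps -> 30 million requests per minute; at an assumed
    # $0.40 per million requests that would be $12 per minute.
    print(request_cost_per_minute(500_000, 0.40))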

the1dv (Option: D)
Jan 16, 2024

Cloud Run could handle this volume with something like 500 instances, which would cost a pretty ridiculous amount per minute, so unfortunately there isn't enough information in this question about how long the gaps without data last to make a proper decision. An autoscaling managed instance group can scale to zero, and 500k per second would be handled relatively easily by a few instances.

Tirthankar17 (Option: B)
Feb 20, 2024

B is correct

AdityaGupta (Option: B)
Oct 6, 2023

The answer should be B, because the data is NoSQL and real-time queries are needed -> Bigtable. Cloud Run will help with -> low cost (zero when there are no events).

DinRush
Oct 12, 2023

Cloud Functions can also scale to 0. But I guess that, since it is managed, scaling can happen faster at the function level.

thewalker (Option: B)
Nov 11, 2023

MIGs cannot scale VMs down to 0, as per https://cloud.google.com/compute/docs/autoscaler/scaling-cloud-monitoring-metrics#configure_utilization_target
So B is the answer.

odacir (Option: B)
Nov 18, 2023

Compute: Cloud Run vs. a Compute Engine autoscaling managed instance group. Cloud Run wins because it can scale down to 0 instances -> spiky workloads will be cheaper.
Storage: BigQuery vs. Bigtable. 500,000 requests per second is not suitable for BQ: https://cloud.google.com/bigquery/quotas "A user can make up to 100 API requests per second to an API method." So the answer must be B.

Andoameda9 (Option: B)
Dec 13, 2023

Apart from the fact that Cloud Run can scale to zero, another benefit in this scenario is that Cloud Run provides out-of-the-box revision management for the web service.

wly_al (Option: A)
Dec 30, 2023

"Will not receive any requests" = Cloud Run.

kshlgpt (Option: C)
Jan 2, 2024

Cloud Run can't handle 50,000 requests per second. A & B are eliminated.

kshlgpt (Option: D)
Jan 2, 2024

Cloud Run can't support 50,000 requests per second. The correct answer should be D.

yas_cloud (Option: A)
Mar 8, 2024

Not sure why this is voted between B and D. It should be A. MIG won't support this, which rules out C and D. Between BQ and BT, please note that "data will be queried later in real time, based on exact matches of a known set of attributes". This is supported by BQ alone. So I would go with A.

afsarkhan (Option: D)
Jul 13, 2024

It's hard for Cloud Run to scale to accept 500k rps, so I'm choosing Option D.