
Professional Machine Learning Engineer Exam - Question 49


You work for an online travel agency that also sells advertising placements on its website to other companies. You have been asked to predict the most relevant web banner that a user should see next. Security is important to your company. The model latency requirements are 300ms@p99, the inventory is thousands of web banners, and your exploratory analysis has shown that navigation context is a good predictor. You want to implement the simplest solution. How should you configure the prediction pipeline?

Correct Answer: C

Given the requirements of the problem, 300ms@p99 model latency, an inventory of thousands of web banners, and navigation context as a strong predictor, embedding the client on the website and deploying the gateway on App Engine handle request routing efficiently and simply. Using Cloud Bigtable to write and read the user's navigation context is the crucial choice: Bigtable supports high throughput at low latency, making it suitable for real-time prediction scenarios. Finally, deploying the model on AI Platform Prediction provides managed infrastructure for serving the machine learning model. This configuration balances simplicity and efficiency while meeting the latency requirement.
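To make the recommended configuration concrete, here is a minimal sketch of what the App Engine gateway could look like, assuming a Flask handler, placeholder project/instance/table IDs, a row key equal to the user ID, and a hypothetical model named banner_ranker; the real schema and model signature would differ:

```python
# Hypothetical gateway for option C: read the user's navigation context
# from Bigtable, then ask an AI Platform Prediction model to rank banners.
from flask import Flask, jsonify
from google.cloud import bigtable
from googleapiclient import discovery

PROJECT_ID = "my-project"    # placeholder IDs, not from the question
INSTANCE_ID = "nav-context"
TABLE_ID = "user_context"
MODEL_NAME = f"projects/{PROJECT_ID}/models/banner_ranker"  # hypothetical model

app = Flask(__name__)
table = (bigtable.Client(project=PROJECT_ID)
         .instance(INSTANCE_ID)
         .table(TABLE_ID))
ml = discovery.build("ml", "v1")  # AI Platform Prediction online-prediction API

@app.route("/banner/<user_id>")
def next_banner(user_id):
    # Single-row point read; the row key is assumed to be the user ID.
    row = table.read_row(user_id.encode())
    pages = []
    if row is not None:
        cells = row.cells.get("ctx", {}).get(b"recent_pages", [])
        pages = [cell.value.decode() for cell in cells]

    # Online prediction; the instance format depends on the deployed model.
    body = {"instances": [{"user_id": user_id, "recent_pages": pages}]}
    resp = ml.projects().predict(name=MODEL_NAME, body=body).execute()
    return jsonify(resp["predictions"][0])
```

The gateway does only two cheap operations per request, a single-row Bigtable read and one online prediction call, which is what keeps p99 latency within the 300ms budget.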

Discussion

17 comments
Paul_Dirac (Option: C)
Aug 1, 2021

Security => not A. B doesn't handle processing against the banner inventory. D: deployment on GKE is less simple than on AI Platform; besides, Memorystore is in-memory, while banners are stored persistently. Ans: C

pinimichele01
Apr 25, 2024

B: doesn't handle processing with banner inventory ---> not true...

Celia20210714 (Option: C)
Jul 19, 2021

ANS: C. GAE + IAP: https://medium.com/google-cloud/secure-cloud-run-cloud-functions-and-app-engine-with-api-key-73c57bededd1 Bigtable at low latency: https://cloud.google.com/bigtable#section-2
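As a hedged sketch of the GAE + IAP point above: behind IAP, the gateway can verify the JWT that IAP attaches to each request. The audience string below is a placeholder; the real value comes from the IAP settings page of the App Engine app.

```python
# Hypothetical verification of the IAP-signed header inside the gateway.
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

IAP_AUDIENCE = "/projects/123456789/apps/my-project"  # placeholder audience

def verify_iap_jwt(iap_jwt: str) -> str:
    """Validates the X-Goog-IAP-JWT-Assertion token and returns the user email."""
    decoded = id_token.verify_token(
        iap_jwt,
        google_requests.Request(),
        audience=IAP_AUDIENCE,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )
    return decoded["email"]

# In a request handler:
#   email = verify_iap_jwt(request.headers["X-Goog-IAP-JWT-Assertion"])
```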

tavva_prudhvi (Option: C)
Jul 3, 2023

B is also a possible solution, but it does not include a database for storing and retrieving the user's navigation context. This means that every time a user visits a page, the gateway would need to query the website to retrieve the navigation context, which could be slow and inefficient. By using Cloud Bigtable to store the navigation context, the gateway can quickly retrieve the context from the database and pass it to the model for prediction. This makes the overall prediction pipeline more efficient and scalable. Therefore, C is a better option compared to B.

Sum_Sum (Option: B)
Nov 15, 2023

I was torn between B and C. But I really don't see the need for a DB

rightcd
Mar 13, 2024

look at Q80

AnnaR (Option: B)
Apr 26, 2024

Was torn between B and C, but decided for B, because the question asks how we should configure the PREDICTION pipeline! Since the exploratory analysis already identified navigation context as a good predictor, the focus should be on the prediction model itself.

fredcaram (Option: B)
Apr 10, 2023

The volume is too low for a Bigtable scenario

M25 (Option: C)
May 9, 2023

Went with C

CloudKida (Option: C)
May 9, 2023

Bigtable is a massively scalable NoSQL database service engineered for high-throughput, low-latency workloads. It can handle petabytes of data, with millions of reads and writes per second at a latency on the order of milliseconds. Typical use cases for Bigtable are:
* Fraud detection that leverages dynamically aggregated values; applications in fintech and adtech are usually subject to heavy reads and writes.
* Ad prediction that leverages dynamically aggregated values over all ad requests and historical data.
* Booking recommendation based on the overall customer base's recent bookings.

Voyager2 (Option: C)
Jun 5, 2023

C. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and for reading the user's navigation context, and then deploy the model on AI Platform Prediction.
https://cloud.google.com/architecture/minimizing-predictive-serving-latency-in-machine-learning#choosing_a_nosql_database
Typical use cases for Bigtable are:
* Ad prediction that leverages dynamically aggregated values over all ad requests and historical data.

friedi (Option: B)
Jun 22, 2023

B is correct; C introduces computational overhead, unnecessarily increasing serving latency.

Liting (Option: C)
Jul 7, 2023

Bigtable is recommended for storage in this scenario.

harithacML (Option: B)
Jul 14, 2023

Security (gateway) + simplest (AI Platform, no DB)

Mickey321 (Option: B)
Nov 15, 2023

Embed the client on the website, deploy the gateway on App Engine, and then deploy the model on AI Platform Prediction.

gscharly (Option: C)
Apr 21, 2024

agree with Paul_Dirac

PhilipKoku (Option: C)
Jun 6, 2024

C) Bigtable for low latency

ccb23cc (Option: C)
Jun 21, 2024

They affirm that navigation context is a good predictor for your model. Therefore you need to be able to perform the prediction, write the new context (more data gives a better model), and read it back (to use it for your prediction). On one hand, BigQuery is an OLAP system, so writes and reads can take around 2 seconds. On the other hand, Bigtable is suited to OLTP-style workloads and can serve writes and reads in about 9 milliseconds. Conclusion: since one of the requirements is latency below 300ms, your only choice is Bigtable. https://galvarado.com.mx/post/comparaci%C3%B3n-de-bases-de-datos-en-google-cloud-datastore-vs-bigtable-vs-cloud-sql-vs-spanner-vs-bigquery/
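As a rough sketch of the write path described here, assuming the same placeholder IDs and "ctx" column family as the gateway sketch above, each page view could be recorded as a new cell version in the user's context row:

```python
# Rough sketch of the write side: append the latest page view to the
# user's navigation-context row; Bigtable versions cells by timestamp.
import datetime
from google.cloud import bigtable

table = (bigtable.Client(project="my-project")   # placeholder IDs
         .instance("nav-context")
         .table("user_context"))

def record_page_view(user_id: str, page: str) -> None:
    row = table.direct_row(user_id.encode())
    row.set_cell(
        "ctx",                    # assumed column family
        b"recent_pages",
        page.encode(),
        timestamp=datetime.datetime.now(datetime.timezone.utc),
    )
    row.commit()  # single-row mutation; typically single-digit-ms latency
```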