Professional Machine Learning Engineer Exam Questions

Professional Machine Learning Engineer Exam - Question 105


You work for a gaming company that develops massively multiplayer online (MMO) games. You built a TensorFlow model that predicts whether players will make in-app purchases of more than $10 in the next two weeks. The model’s predictions will be used to adapt each user’s game experience. User data is stored in BigQuery. How should you serve your model while optimizing cost, user experience, and ease of management?

Correct Answer: D

Embedding the model in a streaming Dataflow pipeline allows for low latency predictions on real-time events as they occur, which helps in providing a responsive user experience. Dataflow also supports scaling predictions and integrates well with Pub/Sub, reducing the need for extensive server management. Streaming predictions only when events happen optimizes cost by avoiding unnecessary bulk or client-side predictions, and pushing the results to Cloud SQL ensures persistent and managed storage.
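As a rough sketch of the per-event step such a streaming pipeline performs (the event schema, feature names, and the `predict_purchase` stub below are all hypothetical; a real Dataflow pipeline would invoke the trained TensorFlow model via Beam's RunInference between a `ReadFromPubSub` source and a Cloud SQL writer):

```python
import json

# Hypothetical feature order the model was trained on.
FEATURES = ["session_minutes", "level", "purchases_30d"]

def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub gameplay event published as JSON."""
    return json.loads(message.decode("utf-8"))

def to_feature_vector(event: dict) -> list:
    """Arrange event fields into the order the model expects."""
    return [float(event.get(name, 0.0)) for name in FEATURES]

def predict_purchase(features: list) -> float:
    """Stub standing in for the TensorFlow model; the pipeline
    would call RunInference here instead of this toy heuristic."""
    session_minutes, level, purchases_30d = features
    score = 0.05 * purchases_30d + 0.001 * session_minutes
    return min(score, 1.0)

def handle_event(message: bytes) -> dict:
    """One element's journey: Pub/Sub bytes -> prediction row
    that a sink stage could write to Cloud SQL."""
    event = parse_event(message)
    prob = predict_purchase(to_feature_vector(event))
    return {
        "player_id": event["player_id"],
        "will_spend_over_10": prob >= 0.5,
        "probability": prob,
    }
```

In the actual pipeline these functions would sit inside `beam.Map` / RunInference stages, so the model is applied once per event as it arrives rather than in bulk.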

Discussion

12 comments
hiromi (Option: A)
Jun 20, 2023

It seems A (not sure) - https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create-tensorflow
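For reference, the BigQuery ML route that doc describes looks roughly like this (project, dataset, and Cloud Storage paths are placeholders):

```sql
-- Import a saved TensorFlow model from Cloud Storage.
CREATE OR REPLACE MODEL `my_project.game_ml.purchase_model`
  OPTIONS (model_type = 'TENSORFLOW',
           model_path = 'gs://my-bucket/purchase_model/*');

-- Batch-score players directly where the data already lives.
SELECT *
FROM ML.PREDICT(MODEL `my_project.game_ml.purchase_model`,
                (SELECT * FROM `my_project.game_data.player_features`));
```

Since the user data is already in BigQuery, this avoids standing up any serving infrastructure at all.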

M25 (Option: D)
Nov 8, 2023

The phrase "used to adapt each user's game experience" points to non-batch serving, which excludes A and B, and embedding the model in the mobile app would not necessarily "optimize cost". Plus, the classic streaming solution builds on Dataflow along with Pub/Sub and BigQuery, and embedding ML in Dataflow is low-code: https://cloud.google.com/blog/products/data-analytics/latest-dataflow-innovations-for-real-time-streaming-and-aiml. Apparently a modified version of the question points in the same direction: https://mikaelahonen.com/en/data/gcp-mle-exam-questions/

ciro_li
Jan 27, 2024

There's no need to make a prediction after every in-app purchase event. Am I wrong?

TNT87 (Option: C)
Sep 9, 2023

Answer C

tavva_prudhvi
Sep 27, 2023

Option C, embedding the model in the mobile application, would increase the size of the app and may not be suitable for real-time prediction.

TNT87 (Option: A)
Oct 17, 2023

Yeah, it's A

pinimichele01 (Option: A)
Oct 15, 2024

Making a prediction after every in-app purchase is not necessary -> A

bc3f222 (Option: A)
Feb 28, 2025

The hint is "You built a TensorFlow model that predicts whether players will make in-app purchases of more than $10 in the next two weeks." This means that for this particular use case prediction is not real-time, and batch is in fact suitable. Furthermore, BigQuery ML allows you to load a TensorFlow model for serving. This makes BQML the best choice for cost.

Nxtgen (Option: D)
Jan 4, 2024

These were my reasons for choosing D as the best option:
- B: Vertex AI would not minimize cost.
- C: Would not optimize user experience (this may make the game run slowly, i.e. lag).
- A: Would not optimize ease of management/automation.
- D: Best choice?

tavva_prudhvi
May 10, 2024

Why do you want to make a prediction after every app purchase bro?

SamuelTsch (Option: D)
Jan 8, 2024

D could be correct

Mickey321 (Option: D)
May 15, 2024

Embedding the model in a streaming Dataflow pipeline allows low latency predictions on real-time events published to Pub/Sub. This provides a responsive user experience. Dataflow provides a managed service to scale predictions and integrate with Pub/Sub, without having to manage servers. Streaming predictions only when events occur optimizes cost compared to bulk or client-side prediction. Pushing results to Cloud SQL provides a managed database for persistence. In contrast, options A and B use inefficient batch predictions. Option C increases mobile app size and cost.

phani49 (Option: D)
Dec 20, 2024

Why D is the best choice:
- It provides real-time predictions, which is crucial for a good user experience in an MMO setting.
- It leverages Google Cloud's managed services (Dataflow, Pub/Sub, Cloud SQL) to reduce operational overhead and simplify management.
- It allows you to centrally manage your model and easily update it without requiring changes to client applications.
- It optimizes cost by using a pay-as-you-go, autoscaling service rather than running large-scale batch jobs or deploying models on individual user devices.

Option A (import the model into BigQuery ML and do batch predictions): batch predictions are not real-time. This approach introduces a significant delay between data ingestion and predictions, which is not ideal if you need to adapt the user experience quickly based on recent behavior.

NamitSehgal (Option: B)
Feb 20, 2025

While BigQuery ML is a useful tool for certain machine learning tasks, it's not the right tool for serving a complex TensorFlow model and integrating it into a game's experience-adaptation system. Vertex AI Prediction is a better choice for this scenario due to its superior support for serving complex models, its optimized serving infrastructure, and its ease of management.

desertlotus1211 (Option: C)
Feb 26, 2025

It's an online gaming service: you want to stream data in real time, not batch-process it.

desertlotus1211
Feb 26, 2025

My mistake - I meant to click D.