
Professional Data Engineer Exam - Question 34


Flowlogistic Case Study -

Company Overview -

Flowlogistic is a leading logistics and supply chain provider. They help businesses throughout the world manage their resources and transport them to their final destination. The company has grown rapidly, expanding their offerings to include rail, truck, aircraft, and oceanic shipping.

Company Background -

The company started as a regional trucking company and then expanded into other logistics markets. Because they have not updated their infrastructure, managing and tracking orders and shipments has become a bottleneck. To improve operations, Flowlogistic developed proprietary technology for tracking shipments in real time at the parcel level. However, they are unable to deploy it because their technology stack, based on Apache Kafka, cannot support the processing volume. In addition, Flowlogistic wants to further analyze their orders and shipments to determine how best to deploy their resources.

Solution Concept -

Flowlogistic wants to implement two concepts using the cloud:

✑ Use their proprietary technology in a real-time inventory-tracking system that indicates the location of their loads

✑ Perform analytics on all their orders and shipment logs, which contain both structured and unstructured data, to determine how best to deploy resources and which markets to expand into. They also want to use predictive analytics to learn earlier when a shipment will be delayed.

Existing Technical Environment -

Flowlogistic's architecture resides in a single data center:

✑ Databases

8 physical servers in 2 clusters

- SQL Server – user data, inventory, static data

3 physical servers

- Cassandra – metadata, tracking messages

10 Kafka servers – tracking message aggregation and batch insert

✑ Application servers – customer front end, middleware for order/customs

60 virtual machines across 20 physical servers

- Tomcat – Java services

- Nginx – static content

- Batch servers

✑ Storage appliances

- iSCSI for virtual machine (VM) hosts

- Fibre Channel storage area network (FC SAN) – SQL Server storage

- Network-attached storage (NAS) – image storage, logs, backups

✑ 10 Apache Hadoop/Spark servers

- Core Data Lake

- Data analysis workloads

✑ 20 miscellaneous servers

- Jenkins, monitoring, bastion hosts

Business Requirements -

✑ Build a reliable and reproducible environment with scaled parity of production.

✑ Aggregate data in a centralized Data Lake for analysis

✑ Use historical data to perform predictive analytics on future shipments

✑ Accurately track every shipment worldwide using proprietary technology

✑ Improve business agility and speed of innovation through rapid provisioning of new resources

✑ Analyze and optimize architecture for performance in the cloud

✑ Migrate fully to the cloud if all other requirements are met

Technical Requirements -

✑ Handle both streaming and batch data

✑ Migrate existing Hadoop workloads

✑ Ensure architecture is scalable and elastic to meet the changing demands of the company.

✑ Use managed services whenever possible

✑ Encrypt data in flight and at rest

✑ Connect a VPN between the production data center and cloud environment

SEO Statement -

We have grown so quickly that our inability to upgrade our infrastructure is really hampering further growth and efficiency. We are efficient at moving shipments around the world, but we are inefficient at moving data around.

We need to organize our information so we can more easily understand where our customers are and what they are shipping.

CTO Statement -

IT has never been a priority for us, so as our data has grown, we have not invested enough in our technology. I have a good staff to manage IT, but they are so busy managing our infrastructure that I cannot get them to do the things that really matter, such as organizing our data, building the analytics, and figuring out how to implement the CFO's tracking technology.

CFO Statement -

Part of our competitive advantage is that we penalize ourselves for late shipments and deliveries. Knowing where our shipments are at all times has a direct correlation to our bottom line and profitability. Additionally, I don't want to commit capital to building out a server environment.

Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?

A. Store the common data in BigQuery as partitioned tables.

B. Store the common data in BigQuery and expose authorized views.

C. Store the common data encoded as Avro in Google Cloud Storage.

D. Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.

Correct Answer: C

To meet Flowlogistic's requirement of storing common data for both Google BigQuery and Apache Hadoop/Spark workloads, the best solution is to store the data encoded as Avro in Google Cloud Storage. Avro is a widely-used data serialization system that both BigQuery and Hadoop/Spark can read. By storing data in Google Cloud Storage, it becomes accessible to both BigQuery for analytics and to Spark on Dataproc for processing, ensuring interoperability and scalability.
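As a rough illustration of that setup (the project, dataset, table, and bucket names below are hypothetical, not from the case study), the shared Avro files can sit in Cloud Storage and be exposed to BigQuery as an external table, while Spark jobs read the same files directly:

```python
# Hypothetical sketch: expose Avro files in GCS to BigQuery as an external table.
# All resource names are illustrative only.
from google.cloud import bigquery

client = bigquery.Client()

# Tell BigQuery where the shared Avro files live and what format they use.
external_config = bigquery.ExternalConfig("AVRO")
external_config.source_uris = ["gs://flowlogistic-datalake/shipments/*.avro"]

table = bigquery.Table("my-project.logistics.shipments_common")
table.external_data_configuration = external_config
client.create_table(table)  # queries now read the GCS files in place

# Analysts query the external table with standard SQL; Spark jobs on Dataproc
# can read the very same gs:// files, so the data is stored only once.
rows = client.query(
    "SELECT COUNT(*) AS shipments FROM `my-project.logistics.shipments_common`"
).result()
for row in rows:
    print(row.shipments)
```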

Discussion

16 comments
vishal0202 (Option: C)
Sep 21, 2022

C is the answer... Avro data can be accessed by Spark as well.

rtcpost (Option: C)
Oct 22, 2023

C. Store the common data encoded as Avro in Google Cloud Storage. This approach allows for interoperability between BigQuery and Hadoop/Spark as Avro is a commonly used data serialization format that can be read by both systems. Data stored in Google Cloud Storage can be accessed by both BigQuery and Dataproc, providing a bridge between the two environments. Additionally, you can set up data transformation pipelines in Dataproc to work with this data.
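A minimal PySpark sketch of the Dataproc side of that bridge, assuming the same hypothetical bucket as in the sketch above. The Cloud Storage connector on Dataproc lets Spark read gs:// paths directly; Avro support for Spark is bundled on recent Dataproc images (otherwise add the spark-avro package):

```python
# Hypothetical Dataproc/PySpark job reading the shared Avro files from GCS.
# Assumes the spark-avro data source is available on the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("shipment-analysis").getOrCreate()

# Read the same files BigQuery queries through its external table definition.
shipments = spark.read.format("avro").load(
    "gs://flowlogistic-datalake/shipments/*.avro"
)

# Example analysis: number of tracking messages per shipment status.
shipments.groupBy("status").count().show()
```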

JOKKUNO (Option: C)
Dec 4, 2023

Given the scenario described for Flowlogistic's requirements and technical environment, the most suitable option for storing common data that is used by both Google BigQuery and Apache Hadoop/Spark workloads is: C. Store the common data encoded as Avro in Google Cloud Storage.

ducc (Option: C)
Sep 3, 2022

The answer is C

kelvintoys93 (Option: C)
Nov 30, 2022

"Perform analytics on all their orders and shipment logs, which contain both structured and unstructured data" - BigQuery cant take unstructured data so A and B are out. Storing data in HDFS storage is never recommended unless latency is a requirement, so D is out. That leaves us with GCS. Answer is C

tunstila
Jan 1, 2023

I thought you can now store unstructured data in BigQuery via the object tables announced during Google NEXT 2022... If that's possible, does that make B a better choice?

midgoo (Option: B)
Feb 23, 2023

C should be the correct answer. However, please note that Google just released the BigQuery Connector for Hadoop, so if they ask the same question today, B will be the correct answer. A could be correct too, but I cannot see why it has to be partitioned

res3
Jul 4, 2023

If you check https://cloud.google.com/dataproc/docs/concepts/connectors/bigquery, the connector unloads the BQ data to GCS, uses it, and then deletes it from GCS. Storing common data twice (in BQ and GCS) is not the best option compared to C (using GCS as the main common dataset).
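For contrast with the connector route the two comments above discuss, a hedged sketch of Spark reading a table that lives in BigQuery via the spark-bigquery connector (project, dataset, and table names are made up; the connector itself handles pulling the data out of BigQuery for Spark to process):

```python
# Hypothetical option-B style access: Spark reads a BigQuery table through the
# spark-bigquery connector instead of reading shared files from GCS.
# Assumes the connector jar is on the cluster classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq-read").getOrCreate()

orders = (
    spark.read.format("bigquery")
    .option("table", "my-project.logistics.orders_common")
    .load()
)

orders.createOrReplaceTempView("orders")
spark.sql("SELECT COUNT(*) AS order_count FROM orders").show()
```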

nescafe7 (Option: D)
Jul 31, 2023

To simplify the question, Apache Hadoop and Spark workloads that cannot be moved to BigQuery can be handled by Dataproc. So the correct answer is D.

drunk_goat82 (Option: C)
Nov 18, 2022

BigQuery can use federated queries to connect to the Avro data in GCS while Spark jobs run on it. If you duplicate the data, you have to manage both data sets.

Mathew106 (Option: B)
Jul 24, 2023

B is the right answer. The common data will live in BigQuery but will be accessible via SQL views from the Hadoop workloads.

Leelas (Option: D)
Nov 4, 2022

In the technical requirements it was clearly mentioned that they need to migrate the existing Hadoop cluster, for which a Dataproc cluster is the replacement.

solar_maker (Option: C)
Nov 14, 2022

C, as both are capable of reading Avro, but the customer does not know what they want to do with the data yet.

gudiking (Option: C)
Nov 16, 2022

C, as it can be used as an external table from BigQuery, and with the Cloud Storage connector it can be used by the Spark workloads (running in Dataproc).

wan2three (Option: A)
Nov 16, 2022

A. They wanted BigQuery, and a connector is all you need to run the Hadoop or Spark workloads. Hadoop migration can be done using Dataproc.

wan2three
Nov 16, 2022

Also, apparently they want all data in one place and they want BigQuery.

DGames (Option: B)
Dec 13, 2022

Answer B looks OK, because the question says they want to store common data that can be used by both workloads, and since BigQuery is the primary analytical tool, it would be the best option and make it easy to analyze the common data.

korntewin (Option: C)
Jan 8, 2023

I would vote for C, as it can be used for analysis with BigQuery. Furthermore, the Hadoop workloads can also be moved to Dataproc connected to GCS.

dhvanil (Option: C)
Jun 13, 2024

Data lake, fully managed, data analytics, and storing structured and unstructured data are the keywords, so the answer is GCS, option C.