Exam: SnowPro Core
Question 339

A company needs to read multiple terabytes of data for an initial load as part of a Snowflake migration. The company can control the number and size of CSV extract files.

How does Snowflake recommend maximizing the load performance?

    Correct Answer: C

    To maximize load performance when reading multiple terabytes of data for an initial load, Snowflake recommends producing a larger number of smaller files (the documentation suggests roughly 100-250 MB compressed per file) and processing the ingestion with appropriately sized virtual warehouses. Splitting the extract parallelizes the workload, letting Snowflake take advantage of its massively parallel processing (MPP) architecture and speeding up the load. It also avoids the inefficiencies and higher costs associated with very large files or undersized virtual warehouses.

Discussion
halol (Option: C)

C, I think. See https://www.analytics.today/blog/top-3-snowflake-performance-tuning-tactics#:~:text=Avoid%20Scanning%20Files&text=Before%20copying%20data%2C%20Snowflake%20checks,that%20have%20already%20been%20loaded.

OTE (Option: C)

I'd go for C. A serverless approach (A) is usually not recommended for large files due to the higher costs.

AS314 (Option: A)

I think A is correct: https://www.snowflake.com/blog/best-practices-for-data-ingestion/

BigDataBB

Snowpipe is designed for continuous ingestion and is built on COPY. The COPY command loads batches of data from external cloud storage or an internal stage. For an initial load, COPY is the better solution, so the answer is C.
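To make the COPY-based approach above concrete, here is a small sketch that composes a `COPY INTO` statement for a batch of staged CSV parts. The table name, stage path, and file pattern are purely illustrative; the statement itself would be executed inside Snowflake (for example through the Python connector), which this sketch does not attempt.

```python
def build_copy_statement(table, stage_path, pattern=None):
    """Compose a Snowflake COPY INTO statement for a batch CSV load.
    Names passed in are illustrative; this only builds the SQL text."""
    parts = [
        f"COPY INTO {table}",
        f"FROM @{stage_path}",
        "FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)",
    ]
    if pattern:
        # Restrict the load to matching staged files, e.g. the split parts.
        parts.append(f"PATTERN = '{pattern}'")
    return "\n".join(parts) + ";"
```

For example, `build_copy_statement("orders", "my_stage/initial_load/", pattern=".*part_.*[.]csv")` yields a statement that loads all the split extract files in one batch, which a suitably sized warehouse can process in parallel.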

MultiCloudIronMan (Option: C)

Correct.