A Snowflake user wants to unload data from a relational table sized 5 GB using CSV. The extract needs to be as performant as possible.
What should the user do?
Leaving MAX_FILE_SIZE at its default of 16 MB takes advantage of parallel operations, which is what matters for unload performance: smaller output files let Snowflake write many files in parallel, significantly speeding up the extraction.
By default, COPY INTO <location> statements separate table data into a set of output files to take advantage of parallel operations. The maximum size of each file is set with the MAX_FILE_SIZE copy option. The default value is 16777216 (16 MB), but it can be increased to accommodate larger files. The maximum supported file size is 5 GB for Amazon S3, Google Cloud Storage, or Microsoft Azure stages.
https://docs.snowflake.com/en/user-guide/data-unload-considerations#unloading-to-a-single-file
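A minimal sketch of the recommended unload, using hypothetical stage and table names (@my_unload_stage and my_5gb_table are illustrative, not from the question):

```sql
-- Hypothetical names for illustration only.
-- Omitting MAX_FILE_SIZE keeps the 16 MB default, so Snowflake
-- splits the unload into many files and writes them in parallel.
COPY INTO @my_unload_stage/extract/
  FROM my_5gb_table
  FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"');
```

By contrast, adding SINGLE = TRUE (or raising MAX_FILE_SIZE toward 5 GB to force one file) would disable parallel file writes, which is exactly what hurts performance here.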
D is more performant https://docs.snowflake.com/en/user-guide/data-unload-considerations
D correct. Performant => parallel processing of file creation/unloading