Question 6 of 101

Refer to the exhibit. A Mule application is being designed to be deployed to several CloudHub workers. The Mule application's integration logic is to replicate changed Accounts from Salesforce to a backend system every 5 minutes.

A watermark will be used to only retrieve those Salesforce Accounts that have been modified since the last time the integration logic ran.

What is the most appropriate way to implement persistence for the watermark in order to support the required data replication integration logic?

    Correct Answer: A

    The most appropriate way to persist the watermark for a Mule application deployed to several CloudHub workers is a persistent Object Store. A persistent Object Store holds key-value pairs that survive restarts and, on CloudHub (backed by Object Store v2), is shared across all workers of the application, making it well suited to state such as a last-modified watermark. Every worker therefore reads and updates the same timestamp, keeping the replication logic consistent and reliable.
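    A minimal Mule 4 sketch of this approach (the store name, key, and the Salesforce query step are illustrative placeholders):

```xml
<!-- Persistent Object Store: survives restarts and, on CloudHub with
     Object Store v2, is shared across all workers of the application -->
<os:object-store name="watermarkStore" persistent="true"/>

<flow name="replicateAccountsFlow">
  <scheduler>
    <scheduling-strategy>
      <fixed-frequency frequency="5" timeUnit="MINUTES"/>
    </scheduling-strategy>
  </scheduler>
  <!-- Read the last watermark; fall back to an initial value on first run -->
  <os:retrieve key="lastRunTimestamp" objectStore="watermarkStore" target="watermark">
    <os:default-value>#["1970-01-01T00:00:00Z"]</os:default-value>
  </os:retrieve>
  <!-- ... query Salesforce for Accounts with LastModifiedDate > vars.watermark
       and replicate the changes to the backend system ... -->
  <!-- Persist the new watermark so the next run, on any worker, picks it up -->
  <os:store key="lastRunTimestamp" objectStore="watermarkStore">
    <os:value>#[now() as String]</os:value>
  </os:store>
</flow>
```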

Question 7 of 101

Refer to the exhibit. A shopping cart checkout process consists of a web store backend sending a sequence of API invocations to an Experience API, which in turn invokes a Process API. All API invocations are over HTTPS POST. The Java web store backend executes in a Java EE application server, while all API implementations are Mule applications executing in a customer-hosted Mule runtime.

End-to-end correlation of all HTTP requests and responses belonging to each individual checkout instance is required. This is to be done through a common correlation ID, so that all log entries written by the web store backend, Experience API implementation, and Process API implementation include the same correlation ID for all requests and responses belonging to the same checkout instance.

What is the most efficient way (using the least amount of custom coding or configuration) for the web store backend and the implementations of the Experience API and Process API to participate in end-to-end correlation of the API invocations for each checkout instance?

    Correct Answer: B

    The web store backend generates a new correlation ID at the start of each checkout and sets it in the X-CORRELATION-ID HTTP request header of every API invocation belonging to that checkout. Because the Mule HTTP connector adopts an inbound correlation ID header as the event's correlation ID and forwards it on outbound requests by default, the Experience API and Process API implementations need no special code or configuration to generate or manage the ID; log entries from all three systems carry the same value for a given checkout instance. This minimizes custom coding and configuration, making it the most efficient approach.
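    As a sketch, the only custom step is in the Java backend (setting the header); the Mule APIs participate automatically. Config names such as httpListenerConfig and processApiRequestConfig are placeholders:

```xml
<!-- Experience API: the HTTP Listener adopts the inbound correlation ID
     header as the Mule event's correlationId, so it appears in all logs -->
<flow name="checkoutExperienceFlow">
  <http:listener config-ref="httpListenerConfig" path="/checkout"/>
  <logger message="#['Checkout step, correlationId=' ++ correlationId]"/>
  <!-- The HTTP Request operation forwards the correlation ID downstream
       to the Process API by default (sendCorrelationId="AUTO") -->
  <http:request method="POST" config-ref="processApiRequestConfig" path="/orders"/>
</flow>
```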

Question 8 of 101

Mule application A receives an Anypoint MQ request message REQU with a payload containing a variable-length list of request objects. Application A uses the For Each scope to split the list into individual objects and sends each object as a message to an Anypoint MQ queue.

Service S listens on that queue, processes each message independently of all other messages, and sends a response message to a response queue.

Application A listens on that response queue and must in turn create and publish a response Anypoint MQ message RESP with a payload containing the list of responses sent by service S in the same order as the request objects originally sent in REQU.

Assume successful response messages are returned by service S for all request messages.

What is required so that application A can ensure that the length and order of the list of objects in RESP and REQU match, while at the same time maximizing message throughput?

    Correct Answer: C

    To ensure that the length and order of the list of objects in RESP match those in REQU while maximizing message throughput, application A must track the list length and the index of each object in REQU. By recording the index in the For Each scope, carrying it through all messages exchanged with service S, and using persistent storage to accumulate responses when building RESP, application A can restore the original order regardless of the order in which responses arrive. This allows the individual messages to be processed in parallel (maximizing throughput) while guaranteeing that the final assembled list is complete and correctly ordered.
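    A sketch of the index-tracking side of this design, assuming Anypoint MQ user properties carry the metadata (queue names, store names, and variables such as vars.requestId are illustrative):

```xml
<!-- Splitting REQU: record each object's position as a user property -->
<foreach collection="#[payload]">
  <anypoint-mq:publish config-ref="mqConfig" destination="requests">
    <anypoint-mq:properties>#[{
      requestId: vars.requestId,  // correlates all items of one REQU
      index: vars.counter,        // 1-based position within the list
      total: sizeOf(vars.originalList)
    }]</anypoint-mq:properties>
  </anypoint-mq:publish>
</foreach>

<!-- Collecting responses: persist each one under its original index, then
     assemble RESP in index order once all 'total' responses have arrived -->
<flow name="collectResponsesFlow">
  <anypoint-mq:subscriber config-ref="mqConfig" destination="responses"/>
  <os:store objectStore="responseStore"
            key="#[attributes.properties.requestId ++ '-' ++ attributes.properties.index]"/>
  <!-- ... when the number of stored responses equals 'total', retrieve keys
       1..total in order and publish RESP ... -->
</flow>
```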

Question 9 of 101

Refer to the exhibit. A Mule application is deployed to a cluster of two customer-hosted Mule runtimes. The Mule application has a flow that polls a database and another flow with an HTTP Listener.

HTTP clients send HTTP requests directly to individual cluster nodes.

What happens to database polling and HTTP request handling in the time after the primary (master) node of the cluster has failed, but before that node is restarted?

    Correct Answer: C

    When the primary node in a Mule cluster fails, the secondary node automatically takes over as the new primary node. Database polling continues because, in a Mule cluster, polling sources run only on the primary node; once the surviving node is promoted, it resumes the polling. However, since HTTP clients send requests directly to individual nodes rather than through a load balancer, only the remaining active node can accept HTTP requests. Requests sent directly to the failed node will fail until that node is restarted.

Question 10 of 101

What aspects of a CI/CD pipeline for Mule applications can be automated using MuleSoft-provided Maven plugins?

    Correct Answer: B

    MuleSoft-provided Maven plugins can automate several aspects of a CI/CD pipeline for Mule applications: the Mule Maven plugin compiles, packages, and deploys the application, while the MUnit Maven plugin runs unit tests and validates unit test coverage. The aspects listed in option B are therefore all automatable with these plugins, as they cover the essential steps of a typical CI/CD process. Running integration tests and importing from API Designer are not tasks these Maven plugins automate.
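    A sketch of the relevant plugin configuration in a Mule project's pom.xml (versions, the application name, and the coverage threshold are illustrative):

```xml
<!-- Mule Maven plugin: packages the application and deploys it,
     e.g. via `mvn clean package deploy -DmuleDeploy` -->
<plugin>
  <groupId>org.mule.tools.maven</groupId>
  <artifactId>mule-maven-plugin</artifactId>
  <version>3.8.2</version>
  <extensions>true</extensions>
  <configuration>
    <cloudHubDeployment>
      <muleVersion>4.4.0</muleVersion>
      <applicationName>accounts-sync-app</applicationName>
      <environment>Sandbox</environment>
    </cloudHubDeployment>
  </configuration>
</plugin>

<!-- MUnit Maven plugin: runs unit tests and enforces coverage -->
<plugin>
  <groupId>com.mulesoft.munit.tools</groupId>
  <artifactId>munit-maven-plugin</artifactId>
  <version>2.3.11</version>
  <executions>
    <execution>
      <goals>
        <goal>test</goal>
        <goal>coverage-report</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <coverage>
      <runCoverage>true</runCoverage>
      <failBuild>true</failBuild>
      <requiredApplicationCoverage>80</requiredApplicationCoverage>
    </coverage>
  </configuration>
</plugin>
```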