Among the many powerful connectors in MuleSoft's repository, the
Salesforce connector is one of the most widely used. I've used the
Salesforce connector to update and insert (upsert) data into sObjects
using both the SOAP API and the Bulk API, but recently we got a requirement to export some of
the Salesforce objects.
Before getting into the actual discussion, let's see what the Bulk Query API is.
Bulk Query
Use bulk query to efficiently query large data sets and reduce the number of API
requests. A bulk query can retrieve up to 15 GB of data, divided into 15 1-GB files. The data
formats supported are CSV, XML, and JSON.
Use Bulk Query
When adding a batch to a bulk query job, the Content-Type in the header for the request must be either text/csv, application/xml, or application/json, depending on the content type specified when the job was created. The actual SOQL statement supplied for the batch is in plain text format.
How Bulk Queries Are Processed
When a bulk query is processed, Salesforce attempts to execute the query.
If the query doesn’t execute within the standard 2-minute timeout limit, the job fails and a
QUERY_TIMEOUT error is returned. In this case, rewrite a simpler query and resubmit the
batch.
If the query succeeds, Salesforce
attempts to retrieve the results. If the results exceed the 1-GB file size limit or take
longer than 10 minutes to retrieve, the completed results are cached and another attempt is
made. After 15 attempts, the job fails and the error message “Retried more than fifteen times”
is returned. In this case, consider using the PK Chunking header to split the query results
into smaller chunks. If the attempts succeed, the results are returned and stored for seven
days.
Note: To avoid query timeouts when you have millions of records in your sObjects, Salesforce recommends enabling PK chunking when querying tables with more than 10 million records or when a bulk query consistently times out.
PK Chunking Header
Use the PK Chunking request header to enable automatic primary key (PK) chunking for a
bulk query job. PK chunking splits bulk queries on very large tables into chunks based on the
record IDs, or primary keys, of the queried records.
Each chunk is processed as a separate batch that counts toward your daily batch limit, and you
must download each batch’s results separately. PK chunking works only with queries that don’t
include SELECT clauses or conditions other than WHERE.
For example, let's say you enable PK chunking for the following query on an Account table with 10,000,000 records:

SELECT Name FROM Account

Assuming a chunk size of 250,000 and a starting record ID of 001300000000000, the query is split into the following 40 queries. Each query is submitted as a separate batch.

1  | SELECT Name FROM Account WHERE Id >= 001300000000000 AND Id < 00130000000132G
2  | SELECT Name FROM Account WHERE Id >= 00130000000132G AND Id < 00130000000264W
3  | SELECT Name FROM Account WHERE Id >= 00130000000264W AND Id < 00130000000396m
...
40 | SELECT Name FROM Account WHERE Id >= 00130000000euQ4 AND Id < 00130000000fxSK
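The boundary IDs in the table above can be reproduced by treating the record ID as a base-62 counter over the digits 0-9, A-Z, a-z. The sketch below is purely illustrative (Salesforce computes chunk boundaries internally, and real IDs reserve a three-character object key prefix); it uses the chunk size, record count, and starting ID from the example:

```python
# Alphabet Salesforce uses to order record IDs: 0-9, then A-Z, then a-z.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def id_to_int(record_id: str) -> int:
    """Interpret a record ID as a base-62 number."""
    value = 0
    for ch in record_id:
        value = value * 62 + ALPHABET.index(ch)
    return value

def int_to_id(value: int, width: int) -> str:
    """Convert back to a zero-padded base-62 ID of the given width."""
    digits = []
    while value:
        value, rem = divmod(value, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits)).rjust(width, "0")

def chunk_boundaries(start_id: str, total_records: int, chunk_size: int):
    """Yield the lower-bound ID of each PK chunk."""
    start = id_to_int(start_id)
    n_chunks = -(-total_records // chunk_size)  # ceiling division
    for i in range(n_chunks):
        yield int_to_id(start + i * chunk_size, len(start_id))

bounds = list(chunk_boundaries("001300000000000", 10_000_000, 250_000))
# bounds[1] is the lower bound of the second chunk: "00130000000132G"
```

With 10,000,000 records and a 250,000-record chunk size this yields exactly 40 chunks, matching the query list above.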
The Bulk API query doesn't support the following SOQL features:
- COUNT
- ROLLUP
- SUM
- GROUP BY CUBE
- OFFSET
- Nested SOQL queries (joins)
- Relationship fields (i.e., relationship__r)
Mule 4 Bulk Query Application
I am going to explain the MuleSoft process to create a Bulk API query job with PK chunking and retrieve the results.
Bulk_API_Query_Main Flow
In MuleSoft, we need to execute four steps to get the data using the SFDC Bulk API:
1. Create a job by specifying the object name (Contact, Account) and the operation (query).
2. Create a Salesforce batch query (SOQL query: "SELECT Id, FirstName, Lastname, Email, Phone FROM Contact") and assign it to the previously created job.
3. Check the batch info list, iterate over it to check the status of each batch, and, once all batches are completed, close the job.
4. As the final step, use the Salesforce Batch Result Stream to get the bulk records from SFDC.
Step 1: Create a job in Salesforce using the Create Job connector.
SFDC Connector Configuration:
To enable PK chunking, we need to add the Sforce-Enable-PKChunking header in the connector's Advanced tab.
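Under the hood, the Create Job connector posts a jobInfo document to the Bulk API with that header attached. A rough sketch of the request it builds (the helper function is hypothetical, and session handling and the API version are omitted; the jobInfo element names and the PK chunking header come from the Salesforce Bulk API):

```python
def create_job_request(sobject: str, operation: str = "query",
                       content_type: str = "CSV", chunk_size: int = 250_000):
    """Build the body and headers for a Bulk API create-job request.

    The Sforce-Enable-PKChunking header is what the connector's
    Advanced tab sets for us.
    """
    body = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">'
        f"<operation>{operation}</operation>"
        f"<object>{sobject}</object>"
        f"<contentType>{content_type}</contentType>"
        "</jobInfo>"
    )
    headers = {
        "Content-Type": "application/xml; charset=UTF-8",
        "Sforce-Enable-PKChunking": f"chunkSize={chunk_size}",
        # "X-SFDC-Session": session_id,  # obtained from a prior login call
    }
    return body, headers

body, headers = create_job_request("Contact")
```

The job ID returned by Salesforce for this request is what every later step (batch creation, polling, closing) refers to.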
Step 2: Batch Query:
Create a batch query using the Create Batch for Query connector; it creates the batch query using the bulk job ID created in the previous step.
Batch Query Flow:
Connector Configuration:
Step 3: Get Batch Job Info:
This flow pauses processing for a couple of minutes to give the batch query jobs time to complete; I used a scripting component for the wait. Below is the Mule flow for getting the batch info, using the Batch Info List connector.
Once we get the response from Batch Info List, we iterate over the batches and check the status of each one. When a batch's status is Completed, we fetch its results using the Query Result Stream connector and write the data to a CSV file. Once all batches are completed, we close the job using the Close Job connector.
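The poll-and-close logic in steps 3 and 4 boils down to a loop: fetch the batch info list, harvest results for each newly completed batch, and close the job once everything is done. A condensed sketch, where the `fetch_batches`, `save_results`, and `close_job` callables stand in for the Batch Info List, Query Result Stream, and Close Job connectors (names and shapes here are illustrative assumptions, not the connector API):

```python
import time

def drain_job(job_id, fetch_batches, save_results, close_job,
              poll_interval=120, max_polls=30):
    """Poll batch statuses, save each batch's results exactly once,
    and close the job when every batch reports Completed."""
    done = set()  # batch IDs whose results have already been written out
    for _ in range(max_polls):
        batches = fetch_batches(job_id)  # [{"id": ..., "state": ...}, ...]
        for b in batches:
            if b["state"] == "Completed" and b["id"] not in done:
                save_results(job_id, b["id"])  # e.g. write Contact<N>.csv
                done.add(b["id"])
            elif b["state"] == "Failed":
                raise RuntimeError(f"batch {b['id']} failed")
        if len(done) == len(batches):
            close_job(job_id)
            return done
        time.sleep(poll_interval)
    raise TimeoutError("batches did not complete in time")

# Stub-driven demonstration: two polls; the second finds everything done.
states = [
    [{"id": "b1", "state": "InProgress"}, {"id": "b2", "state": "Completed"}],
    [{"id": "b1", "state": "Completed"}, {"id": "b2", "state": "Completed"}],
]
saved, closed = [], []
done = drain_job("750xx0000000001", lambda j: states.pop(0),
                 lambda j, b: saved.append(b), lambda j: closed.append(j),
                 poll_interval=0)
```

Saving a batch's results as soon as it completes, rather than waiting for the whole job, is what lets the Mule flow stream each chunk to its own CSV file.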
Below is the xml configuration for all steps:
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:file="http://www.mulesoft.org/schema/mule/file" xmlns:scripting="http://www.mulesoft.org/schema/mule/scripting"
xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core"
xmlns:salesforce="http://www.mulesoft.org/schema/mule/salesforce" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/salesforce http://www.mulesoft.org/schema/mule/salesforce/current/mule-salesforce.xsd
http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd
http://www.mulesoft.org/schema/mule/scripting http://www.mulesoft.org/schema/mule/scripting/current/mule-scripting.xsd
http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/current/mule-file.xsd">
<http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="6c04f245-af8f-47dd-8e3c-4d2db541987d" >
<http:listener-connection host="0.0.0.0" port="8084" />
</http:listener-config>
<salesforce:sfdc-config name="Salesforce_Config" doc:name="Salesforce Config" doc:id="6b3953bc-e517-4328-a489-f635fbdeea5e" >
<salesforce:basic-connection username="${sfdc.user}" password="${sfdc.password}" securityToken="${sfdc.token}" url="${sfdc.url}"/>
</salesforce:sfdc-config>
<configuration-properties doc:name="Configuration properties" doc:id="20fb5170-c64e-433c-ad15-77b13765f57b" file="config-properties.yaml" />
<file:config name="File_Config" doc:name="File Config" doc:id="e226bd49-06a2-41c4-b579-d59b844235b5" >
<file:connection workingDir="${file.location}" />
</file:config>
<sub-flow name="csv" doc:id="07cf456a-8090-4b49-86e1-ba2f4a2052f2" >
<salesforce:query-result-stream doc:name="Query result stream" doc:id="653449ba-42ae-4d6d-9fb3-6721493be26a" config-ref="Salesforce_Config"/>
<file:write doc:name="Write" doc:id="0c6fb17e-a563-4228-b37a-a983b292635e" config-ref="File_Config" path="#['Contact' ++ vars.counter ++ '.csv']">
<file:content ><![CDATA[#[%dw 2.0
output application/csv headerLineNumber = 0 , header = true , separator = ","
ns ns0 http://www.force.com/2009/06/asyncapi/dataload
---
payload.ns0#queryResult.*ns0#records map ( record , indexOfRecord ) -> {
Id: record.*ns0#Id[0],
FirstName: record.ns0#FirstName default "",
LastName: record.ns0#LastName default "",
Email: record.ns0#Email default "",
Phone: record.ns0#Phone default ""
}]]]></file:content>
</file:write>
<logger level="INFO" doc:name="Logger" doc:id="f4d9e3b9-1256-499f-a1d6-590e34c23555" />
</sub-flow>
<flow name="bulkapiqueryFlow" doc:id="ddda9825-8591-4441-a979-dac3f0f43b6e" >
<http:listener doc:name="Listener" doc:id="9fba8c05-486e-488f-9479-9b8033676978" config-ref="HTTP_Listener_config" path="/contacts"/>
<flow-ref doc:name="createJob" doc:id="0de2cfb8-18d0-4d19-8b4d-0552332bc78b" name="createJob"/>
<flow-ref doc:name="createBatchQuery" doc:id="272ef823-50b8-424f-9eb5-108bd3384939" name="createBatchQuery"/>
<flow-ref doc:name="batchInfo" doc:id="e3df6286-2ba4-4577-8671-5207eeeee957" name="batchInfo"/>
</flow>
<sub-flow name="iterateBatchList" doc:id="c79c3f14-4748-42f0-aecd-d0764ecd268f" >
<ee:transform doc:name="Transform Message" doc:id="2f3c282a-f872-40db-a5eb-c18ebdcd92b5">
<ee:message>
</ee:message>
<ee:variables >
<ee:set-variable variableName="finishedList" ><![CDATA[%dw 2.0
output application/java
---
[]]]></ee:set-variable>
</ee:variables>
</ee:transform>
<foreach doc:name="For Each" doc:id="e9fe04de-54f1-4d3b-918a-d36125d3d8a3" collection="#[vars.batchInfoList]">
<choice doc:name="Choice" doc:id="be76bcc2-cb84-4f30-b5bb-c77428003c27" >
<when expression='lower(payload.state) == "completed"' >
<flow-ref doc:name="csv" doc:id="e75b04ec-ac95-4ac1-9684-682678aa4054" name="csv"/>
<ee:transform doc:name="Transform Message" doc:id="9792d7f7-a72e-47cf-8b87-62c1711061a4" >
<ee:message >
</ee:message>
<ee:variables >
<ee:set-variable variableName="finishedList" ><![CDATA[%dw 2.0
output application/java
---
vars.finishedList + payload]]></ee:set-variable>
</ee:variables>
</ee:transform>
</when>
</choice>
</foreach>
<flow-ref doc:name="Flow Reference" doc:id="c3c8819c-cbf4-4c3c-a411-6d713a81f5fb" name="hcsc-bulkapiqueryFlow1" />
</sub-flow>
<flow name="hcsc-bulkapiqueryFlow1" doc:id="d7444207-1c2d-479e-a2f2-c8bfbed442b4" >
<choice doc:name="Choice" doc:id="bdb4a830-1524-451a-b4d2-c5ae5e583d74" >
<when expression="(sizeOf(vars.batchInfoList) - sizeOf(vars.finishedList))== 1" >
<salesforce:close-job doc:name="Close job" doc:id="6dd76406-0fc0-4538-b265-a1cefb2dfb5c" config-ref="Salesforce_Config" jobId="#[vars.jobId]"/>
</when>
<otherwise>
<salesforce:batch-info-list doc:name="Batch info list" doc:id="36a0b8f5-3a1f-45ca-aea7-d1f857aa4e74" config-ref="Salesforce_Config">
<salesforce:job-id ><![CDATA[#[vars.jobId]]]></salesforce:job-id>
</salesforce:batch-info-list>
<ee:transform doc:name="Transform Message" doc:id="29685cff-e000-423b-94fb-ab8634fef1aa" >
<ee:message >
</ee:message>
<ee:variables >
<ee:set-variable variableName="batchInfoList" ><![CDATA[%dw 2.0
output application/java
---
payload filter (not ($.state == "Completed"))]]></ee:set-variable>
</ee:variables>
</ee:transform>
<flow-ref doc:name="iterateBatchList" doc:id="7394452f-cd49-4035-94a7-6a6dc14cfa82" name="iterateBatchList"/>
</otherwise>
</choice>
</flow>
<flow name="batchInfo" doc:id="91b92b94-35d8-4c29-8bd7-691565f13824" >
<scripting:execute doc:name="wait time" doc:id="280fb2cd-f3db-4723-bc4e-597f2f29519d" engine="groovy" target="Executetime">
<scripting:code>sleep(120000)</scripting:code>
</scripting:execute>
<salesforce:batch-info-list doc:name="Batch info list" doc:id="8a3959cb-890a-439b-8782-e3d1311cdb36" config-ref="Salesforce_Config">
<salesforce:job-id ><![CDATA[#[vars.jobId]]]></salesforce:job-id>
</salesforce:batch-info-list>
<ee:transform doc:name="Transform Message" doc:id="bae61c88-ef19-42d6-acfb-94a5abef479d" >
<ee:message >
</ee:message>
<ee:variables >
<ee:set-variable variableName="batchInfoList" ><![CDATA[%dw 2.0
output application/java
---
payload map ($)]]></ee:set-variable>
</ee:variables>
</ee:transform>
<flow-ref doc:name="iterateBatchList" doc:id="36ffc11d-3503-4b33-a924-fb89ac40e672" name="iterateBatchList"/>
</flow>
<sub-flow name="hcsc-bulkapiquerySub_Flow" doc:id="f8e117cf-8cbb-4691-9cb3-399bdbddb14c" >
<salesforce:batch-info-list doc:name="Batch info list" doc:id="e5d560e3-2cd6-4ba3-a061-99799b6c971a" config-ref="Salesforce_Config">
<salesforce:job-id ><![CDATA[#[vars.jobId]]]></salesforce:job-id>
</salesforce:batch-info-list>
</sub-flow>
<flow name="createJob" doc:id="18d0f8bf-fea4-4a6d-9f46-0e3242334bcb" >
<salesforce:create-job operation="query" doc:name="Create job" doc:id="1b912f91-148a-4064-bebf-7d84f00d10c9" config-ref="Salesforce_Config" type="Contact" >
<salesforce:headers ><![CDATA[#[%dw 2.0
output application/xml
---
{
"Sforce-Enable-PKChunking": "chunkSize=500"
}]]]></salesforce:headers>
</salesforce:create-job>
<set-variable value="#[payload.id]" doc:name="jobId" doc:id="5bf2232f-1a29-4487-8c89-ec121e4b2abd" variableName="jobId" />
<logger level="DEBUG" doc:name="Logger" doc:id="0b264562-80d4-4085-bd4c-d97774867098" message="#[%dw 2.0
output application/json
---
{
createJobResponse:
payload
}]"/>
</flow>
<flow name="closeJob" doc:id="ab8070cd-8179-4aca-b84a-e84db451c00a" >
<salesforce:close-job doc:name="Close job" doc:id="bf953074-4b1e-4143-927b-7fb1d83320a3" config-ref="Salesforce_Config" jobId="#[vars.jobId]" target="closejobResult" />
</flow>
<flow name="createBatchQuery" doc:id="ccf5b507-ed03-407b-be44-9ebe36619b44" >
<set-variable value="${sfdc.query}" doc:name="sfQuery" doc:id="56bf8a75-bcd2-47ce-ad98-0b3d8d4ee873" variableName="sfQuery"/>
<salesforce:create-batch-for-query doc:name="Create batch for query" doc:id="dade2164-eb32-4e4b-8cf0-1172c66e34b0" config-ref="Salesforce_Config" jobInfoBatchForQuery="#[payload]">
<salesforce:batch-query><![CDATA[#[vars.sfQuery]]]></salesforce:batch-query>
</salesforce:create-batch-for-query>
<logger level="DEBUG" doc:name="Logger" doc:id="02b896b1-4805-434e-8d7b-53bde8a4d0e6" message="#[%dw 2.0
output application/json
---
{
batchqueryResponse: payload
}]"/>
</flow>
</mule>
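The DataWeave transform in the `csv` sub-flow maps the namespaced `queryResult`/`records` XML into flat CSV rows, defaulting missing fields to empty strings. The same mapping can be sketched in plain Python (the sample payload below is a hand-made illustration of the `asyncapi/dataload` result shape, not a captured Salesforce response):

```python
import xml.etree.ElementTree as ET

NS = {"ns0": "http://www.force.com/2009/06/asyncapi/dataload"}
FIELDS = ["Id", "FirstName", "LastName", "Email", "Phone"]

def records_to_rows(xml_text):
    """Flatten a namespaced queryResult payload into CSV-ready dicts,
    defaulting missing or empty fields to ""."""
    root = ET.fromstring(xml_text)
    rows = []
    for record in root.findall("ns0:records", NS):
        row = {}
        for field in FIELDS:
            el = record.find(f"ns0:{field}", NS)
            row[field] = el.text if el is not None and el.text else ""
        rows.append(row)
    return rows

# Hand-made sample in the asyncapi/dataload result shape.
sample = """<queryResult xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <records><Id>003xx0000000001</Id><FirstName>Ada</FirstName>
  <LastName>Lovelace</LastName><Email>ada@example.com</Email><Phone/></records>
</queryResult>"""
rows = records_to_rows(sample)
```

Note how the empty `<Phone/>` element becomes an empty string, mirroring the `default ""` in the DataWeave script.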