Question
How can I fetch multiple pages of results automatically?
Answer
Sometimes you may find it convenient to fetch several pages of API results and combine them into a single JSON file. I created a PHP script to do just that, which is attached and also pasted below.
<?php
$resource = '';   // name of the endpoint, for example organizations or tickets
$start_page = 1;  // first page you want to load
$end_page = 10;   // last page you want to load (it could take a while if too big)
$subdomain = '';  // your Zendesk subdomain
$userpwd = '';    // your Zendesk credentials: {username}/token:{token} or {username}:{password}

// Make a single GET request to the API and return the raw response body.
function makerequest($url, $userpwd) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_USERPWD, $userpwd);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_VERBOSE, true);
    $result = curl_exec($ch);
    if (curl_errno($ch)) $result = curl_error($ch);
    curl_close($ch);
    return $result;
}

// Fetch pages $start_page through $end_page and write the combined records to results.json.
function getpages($resource, $start_page, $end_page, $userpwd, $subdomain) {
    $url = "https://$subdomain.zendesk.com/api/v2/$resource.json?page=$start_page";
    $array = array();
    for ($i = $start_page; $i <= $end_page; $i++) {
        $results = json_decode(makerequest($url, $userpwd));
        if (!isset($results->$resource)) break;    // stop if the request failed or returned nothing
        $array = array_merge($array, $results->$resource);
        if (empty($results->next_page)) break;     // no more pages to follow
        $url = $results->next_page;
    }
    $file = fopen("results.json", "w") or exit("Unable to open file!");
    fwrite($file, json_encode(array($resource => $array)));
    fclose($file);
}

getpages($resource, $start_page, $end_page, $userpwd, $subdomain);
?>
Download this script or copy and paste it into a file called paginate.php. Fill in the variables at the beginning, $resource, $start_page, $end_page, $subdomain, and $userpwd, as explained in the comments of the script.
Then navigate to the folder where you saved the file and, from your terminal, run the command php paginate.php. The script writes the combined results to a file called results.json; the file is overwritten on each run, not appended to.
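For example, with $resource set to organizations, a run might look like the following (the records shown are illustrative placeholders, not real data):
php paginate.php
cat results.json
{"organizations":[{"id":1001,"name":"Example Org 1"},{"id":1002,"name":"Example Org 2"}]}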
After receiving the results, you can process them as needed. For example, it can be useful to see which entries in the array are unique and which are duplicates. To do so, write the results into results.json as above, then run one of the two commands below: the first shows each unique value with a count of how many times it occurs, and the second shows only the unique values:
cat results.json | jq '.' | grep '"id"' | sort | uniq -c > uniq.txt
cat results.json | jq '.' | grep '"id"' | sort | uniq > uniq.txt
Using these commands also requires you to have jq, which can be downloaded from: https://stedolan.github.io/jq/download/.
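If you prefer, jq can also do the deduplication on its own. This is a minimal sketch assuming $resource was set to organizations; substitute your own resource name:
jq '[.organizations[].id] | unique | length' results.json
jq '[.organizations[].id] | unique' results.json
The first command prints the number of unique ids, and the second prints the unique ids themselves.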
Additionally, this approach can be useful for exporting data so it can be imported into a new system.
2 comments
Seneca Spurling
It seems that CURLOPT_SSL_VERIFYHOST now needs to be set to 2.
If you don't already have php-curl installed, you'll need that package as well.
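On a Debian or Ubuntu system that might look like this (package names vary by distribution):
sudo apt-get install php-curl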
If you run it and get a zero-length output file, try changing $end_page to 2. If you ask for a page that doesn't exist, it overwrites the file with nothing at all; up to that point, however, it seems to handle things correctly. You get 100 records per page, so, for example, if you have 650 records you'll need to set $end_page to 7.
If you're getting organizations, for example, you can use something like this to get the count so you know how many pages to get. The count may not be accurate over 100,000. See https://developer.zendesk.com/rest_api/docs/support/organizations.
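For example (a minimal sketch, assuming the offset-paginated response includes a top-level count field and that jq is installed; substitute your own subdomain and credentials):
curl -s -u {username}/token:{token} https://{subdomain}.zendesk.com/api/v2/organizations.json | jq '.count'
Divide that count by 100 and round up to get the number of pages to request.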
I know this is just an example and not meant to be robust. This isn't a complaint or request for changes, just adding these comments here in case they help someone like myself in the future.
Brett Bowser
Thanks for taking the time to share this with everyone, Seneca :)