http://pf.tradedoubler.com/unlimited/unlimited_pf_[FeedID]_[AffiliateID].xml.gz
Is this still correct? I have been trying it, but just keep getting:
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
Hi,
Just a quick question - I am new to TradeDoubler and also have access to feeds as above. Can I use my fetch script to download the files if they have been concatenated?
That is, fetch one file containing all my feeds, then unzip it into multiple files and import them? Does that work, or do you add each file name to the fetch script?
Thanks
Richard
Hi Richard,
As far as I know TradeDoubler still provides access to individual feeds (compressed), so there is no need to use the concatenated version - and it doesn't take up any more of anyone's bandwidth overall. FYI, I'm using this format:
http://pf.tradedoubler.com/export/export?export=true&zip=true&a=AFFILIATEID&format=xml&programId=PROGRAMID&pfId=4586&version=1
...obtaining the links from within TradeDoubler by going:
Product Feed > "If you wish to continue using the old interface please click here." > Select Site...
...with the links displayed for each approved merchant.
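One gotcha if you fetch these URLs from a script: quote the whole URL, as the & characters are special to the shell and will break the command otherwise. A quick sketch (AFFILIATEID, PROGRAMID and the output path are placeholders for your own values):
/usr/bin/wget -O "/path/to/feeds/merchant.xml.gz" "http://pf.tradedoubler.com/export/export?export=true&zip=true&a=AFFILIATEID&format=xml&programId=PROGRAMID&pfId=4586&version=1"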
It would certainly be possible to set up your fetch.php to handle a combined file - if you're happy to use the filenames as extracted, the fetch script could be as simple as one wget and one unzip. However, this may make administration awkward if the filenames only contain a merchant ID, in which case you may want to rename them; for example (abbreviated):
wget http://www.example.com/allfeeds.zip
unzip allfeeds.zip
mv 123.xml MerchantA.xml
mv 456.xml MerchantB.xml
mv 789.xml MerchantC.xml
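One thing to watch if you run this on a schedule: by default unzip prompts before overwriting files that already exist, which would stall an unattended run, so on repeat runs you would probably want the -o (overwrite without prompting) flag:
unzip -o allfeeds.zip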
Hope this helps!
Cheers,
David.
Thanks again David,
I will continue to use the individual files - I couldn't originally find an option to view them individually. My fetch doesn't currently handle gzip files, so could you quickly look at the following and advise whether it is correct?
#!/bin/sh
#########
# FETCH #
#########
/usr/bin/wget -O "/home/onestop/public_html/feeds/burtons.xml.gz" "{link saved}"
gzip -d burtons.xml.gz
/usr/bin/wget -O "/home/onestop/public_html/feeds/dell.xml.gz" "{link saved}"
gzip -d dell.xml.gz
##########
# IMPORT #
##########
/usr/bin/php /home/onestop/public_html/scripts/import.php @MODIFIED
Thanks
Richard
Hi Richard,
I would always use fully qualified filenames rather than rely on the current directory - as written, the gzip commands will only find the .gz files if the script happens to be run from the feeds directory, since wget writes them there but gzip is given a bare filename - so for your gzip commands use something like:
gzip -d "/home/onestop/public_html/feeds/burtons.xml.gz"
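...meaning the fetch section as a whole would read:
/usr/bin/wget -O "/home/onestop/public_html/feeds/burtons.xml.gz" "{link saved}"
gzip -d "/home/onestop/public_html/feeds/burtons.xml.gz"
/usr/bin/wget -O "/home/onestop/public_html/feeds/dell.xml.gz" "{link saved}"
gzip -d "/home/onestop/public_html/feeds/dell.xml.gz"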
...otherwise looks fine!
Cheers,
David.
Hi David,
I have used the fully qualified filenames and it appears to be working. However, the second time I ran the script I received the following message:
gzip: /home/onestop/public_html/feeds/burtons.xml already exists; not overwritten
I assumed that gzip -d "/home/onestop/public_html/feeds/burtons.xml.gz" would delete the zip file after unzipping.
If not, is this an option? I would prefer not to just overwrite, as that would double the required disk space on my server.
Thanks
Richard
Hi Richard,
The easiest thing to do is probably to insert an rm command before the gzip...
rm "/home/onestop/public_html/feeds/burtons.xml"
gzip -d "/home/onestop/public_html/feeds/burtons.xml.gz"
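For what it's worth, gzip -d does remove the .gz once it has decompressed it - the message you saw was gzip refusing to overwrite the .xml left over from the previous run, so disk usage won't double either way. You could also use rm -f, which keeps quiet if the .xml doesn't exist yet (e.g. on the very first run); alternatively gzip -df forces the overwrite and makes the separate rm unnecessary:
gzip -df "/home/onestop/public_html/feeds/burtons.xml.gz"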
Cheers,
David.
Hi,
Whilst I have used that format in the past, all my current TradeDoubler feeds are downloaded using this format:
http://pf.tradedoubler.com/export/export?export=true&zip=true&a=[AffiliateID]&format=xml&programId=[FeedID]&pfId=7766&version=2
It would certainly be worth a call to TradeDoubler support if you believe you are using the correct URLs, as they would be able to give a definitive answer. I get the above URLs by going to:
AD MANAGEMENT > Product Feed > Use Old Interface (part of the help text) > Select Website
Cheers,
David.