

Looping Memory Issues - Is there a better way?

Submitted by tekneck on Tue, 2010-02-16 15:27

I don't think this is a MagicParser issue, but nonetheless, perhaps someone can help me streamline the process.

The idea is to create a single script which:

1. Downloads the XML files from my sources (done)

2. Loops through the array of files and updates the database with some information from each of the files. Unfortunately, the data I need lives at the top of the tree, so I have to load the whole XML file to parse it... some are a couple of MB :(

It seems that doing so, and then looping through the results with foreach, consumes so much memory (or I have created an endless loop) that the script never finishes.

If I cut the array down to only a couple of items, it will finish, but it is slow.

Is there a better way to do this? I cannot use your page refresh trick since it runs in CRON, and I don't want to have to make scripts on a 1:1 basis for each feed I collect. I can control the max execution time, so timeouts are not the problem. It just seems to be memory or something else, since the script below with 50 XML files was still running after 12 hours... and the total size of all the files was only around 10MB when added up.

PS: My array below is shortened for the example but really contains about 50 values.

$array_of_xml_filenames = array(8737,1283,1703,1313,1300,745,15703,1791);
$conn = mysql_connect('localhost','user','pass');
mysql_select_db('mydb');
foreach ($array_of_xml_filenames as $value) {
  $xml_source = "/filelocation/".$value.".xml";
  $xml = file_get_contents($xml_source);
  MagicParser_parse("string://".$xml,"myRecordHandler","xml|IDF/DEALERS/DEALER/");
  print MagicParser_getErrorMessage();
}
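[Editor's note: one possible memory-saving variant of the loop above, assuming MagicParser_parse() also accepts a plain local filename as its source argument (rather than only a "string://" URL). The idea is to let the parser read each file itself, so PHP never holds a multi-MB copy of the document in a string variable. This is a sketch, not the original poster's code.]

<?php
// Sketch: avoid file_get_contents() + "string://" so that each file's
// contents are never duplicated into a PHP string before parsing.
$array_of_xml_filenames = array(8737, 1283, 1703, 1313, 1300, 745, 15703, 1791);

foreach ($array_of_xml_filenames as $value) {
    $xml_source = "/filelocation/" . $value . ".xml";
    // Assumption: MagicParser_parse() can read straight from a filename.
    MagicParser_parse($xml_source, "myRecordHandler", "xml|IDF/DEALERS/DEALER/");
    if ($error = MagicParser_getErrorMessage()) {
        print $error;
    }
}
?>

If the parser does require a string source, an explicit unset($xml) at the end of each iteration would at least release each file's buffer before the next one is loaded.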

Thanks!

Submitted by support on Tue, 2010-02-16 15:35

Hi,

Could you perhaps email me a link to one of the files on your server so that I can download it to my test server and take a look? If you could also let me know which fields at the top of the tree you are interested in; it should be possible to crop and correctly terminate the XML so that there is no need to process the entire tree...
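[Editor's note: a rough sketch of the "crop and correctly terminate" idea described above, under two assumptions not confirmed in the thread: that the fields of interest fall within the first 64KB of each file, and that the document root matches the IDF/DEALERS/DEALER structure implied by the format string in the original post. Both the buffer size and the closing tags are illustrative guesses.]

<?php
// Sketch: read only the head of a large feed, cut back to the last
// complete <DEALER> record, then re-close the document so it parses
// as well-formed XML.
$handle = fopen("/filelocation/8737.xml", "r");
$head   = fread($handle, 65536); // first 64KB only (assumed sufficient)
fclose($handle);

$pos = strrpos($head, "</DEALER>");
if ($pos !== false) {
    // Keep everything up to and including the last complete record...
    $head = substr($head, 0, $pos + strlen("</DEALER>"));
    // ...and terminate the assumed enclosing elements.
    $head .= "</DEALERS></IDF>";
    MagicParser_parse("string://" . $head, "myRecordHandler", "xml|IDF/DEALERS/DEALER/");
    print MagicParser_getErrorMessage();
}
?>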

Thanks,
David.