So I was working on a new web app today, and I thought it’d be cool if the URLs it accepted were passed over to the Coral Cache so that a copy of the page was always on file somewhere, just in case it disappeared.

After realizing that I didn’t have to use PHP’s parse_url function to build the URL manually, and that I could simply append it to http://redirect.nyud.net:8090/?url= and let their system do the rest of the work, it was a simple matter of file_get_contents’ing the newly-constructed redirect URL so the page would end up cached.
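
In case it helps, here’s roughly what that boils down to ($url being whatever address the app accepted; the variable names are just for illustration):

```php
// Hand the URL off to Coral's redirect service so the page ends up in their cache.
// $url is whatever URL the app accepted.
$coralUrl = 'http://redirect.nyud.net:8090/?url=' . urlencode($url);

// This one line is the whole trick... and, as it turns out, the whole problem.
$response = file_get_contents($coralUrl);
```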

And therein lies the problem. You see, it seemed like such a good idea…

Unfortunately, the Coral Cache network isn’t known for its speed. It was taking absolutely forever for the script to complete each call, being held up by that one simple line of code.

The only solution I see at the moment is writing a cron job that picks up any URLs that haven’t yet been marked as sent to the cache (a rough sketch of what I mean is below), but that seems very disconnected and ugly. I’d really like to keep all of this code in one spot (like it is now), if possible.
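
For the curious, the cron-job idea would look something like this. The table and column names are invented purely for the sake of the example:

```php
// cron.php -- run every few minutes; pushes any not-yet-cached URLs to Coral.
// The "links" table and "sent_to_cache" column are placeholders for this sketch.
$db = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');

$rows = $db->query("SELECT id, url FROM links WHERE sent_to_cache = 0");
foreach ($rows as $row) {
    // Same slow call as before, but now it blocks a background job
    // instead of the page the visitor is waiting on.
    @file_get_contents('http://redirect.nyud.net:8090/?url=' . urlencode($row['url']));

    $update = $db->prepare("UPDATE links SET sent_to_cache = 1 WHERE id = ?");
    $update->execute(array($row['id']));
}
```

It would work, but it means a second moving part and an extra flag in the database just to paper over one slow HTTP call, which is exactly the disconnect I’m complaining about.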

So, any suggestions from you coders out there? Got some bright ideas for a PHP’er in need? It’d be great if I could just fire off an AJAX request to their server and wait for it to return whenever it finished, but that’s not possible (for a variety of reasons, not the least of which is the inability to make cross-domain HTTP requests).

I await your collective brilliance…
