Parse a web page in Perl

Guest
09.10.2006

The target is a single website and only requires spidering one level deep. This application will be in Perl

----

1) Create a queue of URLs by scanning http://www.Fark.com (front page only) for URLs to the comment forums for each topic.

-The Fark.com front page displays the number of comments in each forum.

-Please store this number, along with its respective forum URL, in a text file (forums.txt), one entry per line, in the following form:

[forum URL, number of comments, timedate stamp] example: [http://forums.fark.com/cgi/fark/comments.pl?IDLink]
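Step 1 could be sketched along the following lines. The regex assumes a front-page layout in which each comments.pl link is followed by its comment count in parentheses; the real Fark markup must be inspected first, so treat the pattern and the `$sample` string as placeholders, not a verified description of the site.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Extract (forum URL, comment count) pairs from the front-page HTML.
# ASSUMPTION: each comments.pl link is followed by its count in
# parentheses; the real Fark markup may differ.
sub parse_front_page {
    my ($html) = @_;
    my @entries;
    while ( $html =~ m{(http://forums\.fark\.com/cgi/fark/comments\.pl\?IDLink=\d+)[^(]*\((\d+)\)}g ) {
        push @entries, [ $1, $2 ];
    }
    return @entries;
}

# One forums.txt line: [forum URL, number of comments, timedate stamp]
sub format_forum_entry {
    my ( $url, $count, $stamp ) = @_;
    return "[$url, $count, $stamp]";
}

my $sample = '<a href="http://forums.fark.com/cgi/fark/comments.pl?IDLink=2337513">story</a> (42)';
for my $e ( parse_front_page($sample) ) {
    print format_forum_entry( @$e, scalar localtime ), "\n";
}
```

In production the `$sample` string would be replaced by the front page fetched with LWP, and each formatted line appended to forums.txt.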

2) Pull a URL out of the queue and fetch the HTML page at that location.

-To be well behaved, skip an entry if the number of comments for that forum hasn't changed since the last scrape.
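The skip logic in step 2 can be sketched by reloading the previous run's forums.txt and comparing counts. The file format is the one defined in step 1; the function names here are illustrative, not part of the spec.

```perl
use strict;
use warnings;

# Read the previous run's forums.txt (lines like "[URL, count, stamp]")
# into a hash of URL => comment count.
sub load_previous_counts {
    my ($path) = @_;
    my %seen;
    open my $fh, '<', $path or return %seen;   # first run: nothing recorded yet
    while ( my $line = <$fh> ) {
        $seen{$1} = $2 if $line =~ /^\[(\S+),\s*(\d+),/;
    }
    close $fh;
    return %seen;
}

# A forum needs fetching if it is new or its comment count has changed.
sub needs_fetch {
    my ( $seen, $url, $count ) = @_;
    return !exists $seen->{$url} || $seen->{$url} != $count;
}
```

With `my %seen = load_previous_counts('forums.txt');`, each queued URL is fetched only when `needs_fetch(\%seen, $url, $count)` is true.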

3) Scan the HTML page looking for image links following "

4) Write these image links into a text file (images.txt), one entry per line, in the following form:

[image link, forum URL] example: [ http://members.cox.net/jboy820/oneflew1.jpg, http://forums.fark.com/cgi/fark/comments.pl?IDLink=2337513]
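Steps 3 and 4 can be sketched as follows. Because the quoted marker in step 3 is cut off in this posting, the pattern below simply collects absolute `<img src>` URLs; it should be narrowed to whatever marker the spec actually intends once that is known.

```perl
use strict;
use warnings;

# Collect absolute image URLs from a comment page.
# ASSUMPTION: plain <img src="http://..."> tags; the truncated marker
# in step 3 may call for a narrower pattern.
sub extract_image_links {
    my ($html) = @_;
    my @links;
    while ( $html =~ m{<img[^>]*\bsrc=["']?(http[^"'\s>]+)}gi ) {
        push @links, $1;
    }
    return @links;
}

# One images.txt line: [image link, forum URL]
sub format_image_entry {
    my ( $img, $forum_url ) = @_;
    return "[$img, $forum_url]";
}

my $page = '<img src="http://members.cox.net/jboy820/oneflew1.jpg">';
for my $img ( extract_image_links($page) ) {
    print format_image_entry( $img,
        'http://forums.fark.com/cgi/fark/comments.pl?IDLink=2337513' ), "\n";
}
```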

----

Be sure to include:

- your ICQ number and the hours you can be reached

- how long this task will take you
