Another one, based on the first script.
So this is basic screenscraping for n00bs; now go and make me proud, boys.
And share some of your scripts.
::emp::
Code:
<HTML>
<body>
<h2>Google Image Crawler</h2>
<form action="<?php echo htmlspecialchars($_SERVER['PHP_SELF']); ?>" method="get">
<input type="text" name="query" size="30"/>
<input type="submit" value="Get this!"/>
</form>
<?php
// Setting the variables
$GooglePrefix = "http://images.google.com/images?q=";
$query = isset($_GET['query']) ? $_GET['query'] : NULL;
if ($query != NULL)
{
echo "Looking for ".htmlspecialchars($query)."<br>";
$CompleteUrl = $GooglePrefix.urlencode($query); // spaces and special chars must be URL-encoded
$res = webFetcher($CompleteUrl); // we use the function webFetcher to get the page
echo "<hr>";
$resultURLs = do_reg($res, "/,.http(.*)\",/U");
//Displaying the images
foreach ($resultURLs as $text) //loop over every URL the regex pulled out
{
if (!preg_match("/google/", $text)) //skip Google's own images (logos, buttons etc.)
echo '<img src="http'.$text.'"><br>';
}
echo "done";
}
function do_reg($text, $regex) //returns all the found matches in an array
{
preg_match_all($regex, $text, $regxresult, PREG_PATTERN_ORDER);
return $regxresult[1]; //index 1 holds the first capture group of every match
}
function webFetcher($url)
{
/* This does exactly what it is named after - it fetches a page from the web, just give it the URL */
$crawl = curl_init(); //the curl library is initiated, the following lines set the curl variables
curl_setopt ($crawl, CURLOPT_URL, $url); //The URL is set
curl_setopt($crawl, CURLOPT_RETURNTRANSFER, 1); //Tells it to return the results in a variable
$resulting = curl_exec($crawl); //curl is executed and the results stored in $resulting
curl_close($crawl); // closes the curl procedure.
return $resulting;
}
?>
</body>
</HTML>
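Oh, and for anyone wondering what do_reg() actually does: preg_match_all with PREG_PATTERN_ORDER fills index 0 of the result array with the full matches and index 1 with the first capture group of each match, which is why the function returns index 1. Quick standalone demo (the HTML string is just made-up sample data, not Google's markup):

```php
<?php
// Pull the href value (the first capture group) out of every link in a string.
$html = '<a href="http://example.com/a">one</a> <a href="http://example.com/b">two</a>';

preg_match_all('/href="(.*?)"/', $html, $matches, PREG_PATTERN_ORDER);

// $matches[0] = the full matches, $matches[1] = first capture group of each match
print_r($matches[1]); // prints the two href values
?>
```

Swap in your own regex and you have the whole scraping trick in three lines.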