curl_init() in PHP doesn't work. Let's hone our skills in working with cURL. A few words about other useful cURL options

Those who use cURL found, after updating to PHP 5.6.1 or 5.5.17, that the cURL module stopped working. The problem has not gone away since then: even in the current version, PHP 5.6.4, it persists.

How do you know if cURL is working for you?

Create a PHP file and copy this into it:
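(The file contents were lost in formatting here; judging by the output below, it was a dump of curl_version(), so a minimal sketch would be:)

<?php
// dump the cURL version information array
print_r(curl_version());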

Open it from the server. If the output is something like:

Array ( [version_number] => 468736 [age] => 3 [features] => 3997 [ssl_version_number] => 0 [version] => 7.39.0 [host] => x86_64-pc-win32 [ssl_version] => OpenSSL/1.0.1j [libz_version] => 1.2.7.3 [protocols] => Array ( [0] => dict [1] => file [2] => ftp [3] => ftps [4] => gopher [5] => http [6] => https [7] => imap [8] => imaps [9] => ldap [10] => pop3 [11] => pop3s [12] => rtsp [13] => scp [14] => sftp [15] => smtp [16] => smtps [17] => telnet [18] => tftp ) )

This means cURL is fine; if instead you get a PHP error, there is a problem.

First, of course, check the php.ini file and find the line

extension=php_curl.dll

And make sure there is no semicolon in front of it.

If that is the case but cURL still does not work, you can run another test to confirm that the situation is unusual. Create another PHP file with this content:
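(The file body did not survive here either; judging by the next step, it is a plain phpinfo() call, something like:)

<?php
// dump the full PHP configuration report
phpinfo();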

Open it in your browser and search the page for "cURL"; if there is only one match, then the cURL module is not loaded.

At the same time, both Apache and PHP work as usual.

There are three solutions:

  1. Method one (not kosher). If you have PHP 5.6.*, take the old php_curl.dll from PHP 5.6.0 and use it to replace the new one in your version (for example, PHP 5.6.4). Those on PHP 5.5.17 or higher should take the same file from PHP 5.5.16 and replace it likewise. The only problem is finding these old versions. You can, of course, poke around in http://windows.php.net/downloads/snaps/php-5.6, but personally I didn't find what I needed there. And the solution itself is somehow not entirely kosher.
  2. Method two (very fast, but also not kosher). From the PHP directory, copy the libssh2.dll file into the Apache24\bin directory and restart Apache.
  3. Method three (kosher; kosher people give it a standing ovation). Add your PHP directory to the system PATH. How to do this is described very well in the official documentation; a command-line sketch follows this list.
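For reference, on Windows this usually comes down to appending the PHP directory to the machine-wide PATH and restarting the Apache service. A command-prompt sketch, assuming PHP lives in C:\php (note that setx merges and may truncate very long PATH values, so the GUI route from the official documentation is safer):

:: run in an elevated command prompt, then restart the Apache service
setx /M PATH "%PATH%;C:\php"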

We check:

Voila, the cURL section is in place.

Why is that? Where did this problem come from? There is no answer to this question, although the mechanism of its occurrence has already been described.

The problem seems to be related to the fact that 5.6.1 shipped with the updated libcurl 7.38.0. But this is not known for certain; the PHP authors point at Apache, saying the bugs are there.

How the problem occurs: If the system PATH does not include the PHP directory, then when the Apache service starts, it is not able to find the new dll (libssh2.dll), which is a dependency for php_curl.
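A quick way to verify this from a script served by Apache (a sketch; the file name and location are up to you) is to print the PATH the web-server process actually sees and check whether the extension got loaded:

<?php
// the PATH as seen by the process running PHP (the Apache service and the CLI may differ)
echo getenv('PATH'), "\n";
// did php_curl end up loaded?
var_dump(extension_loaded('curl'));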

Related bug reports:

Fatal error: Call to undefined function curl_multi_init() in ...

In general, people seem to have had problems with cURL in PHP, if not always, then very often. While googling my problem, I came across threads, some of them more than a dozen years old.

In addition, googling yielded several more conclusions:

There are enough “instructions for idiots” on the Internet, which tell you in detail, with pictures, how to uncomment the line extension=php_curl.dll in the php.ini file.

On the official PHP website, in the cURL installation section, there are only two suggestions regarding the Windows system:

To work with this module on Windows, the files libeay32.dll and ssleay32.dll must exist in the system PATH environment variable. You do not need the libcurl.dll file from the cURL site.

I have read them a dozen times. I switched to English and read them several more times in English. Each time I become more convinced that these two sentences were written by animals, or that someone simply sat on the keyboard: I do not understand their meaning.

There are also some crazy tips and instructions (I even managed to try some).

Only on the PHP bug reporting site did I get close to figuring it out: what needs to be done is to include the PHP directory in the system PATH variable.

In general, for those who have a problem with cURL and who need to "include the directory with PHP in the system PATH variable," go to the instructions already mentioned above: http://php.net/manual/ru/faq.installation.php#faq.installation.addtopath . Everything is simple there, and, most importantly, it is written in human language what needs to be done.

cURL is a special tool designed to transfer files and data using URL syntax. It supports many protocols, such as HTTP, FTP, TELNET and many others. cURL was originally designed as a command-line tool. Luckily for us, the cURL library is supported by the PHP programming language. In this article we will look at some of the advanced functions of cURL, and also touch on the practical application of the acquired knowledge using PHP.

Why cURL?

In fact, there are quite a few alternative ways to fetch web page content. In many cases, mainly due to laziness, I used simple PHP functions instead of cURL:

$content = file_get_contents("http://www.nettuts.com");
// or
$lines = file("http://www.nettuts.com");
// or
readfile("http://www.nettuts.com");

However, these functions have virtually no flexibility and contain a huge number of shortcomings in terms of error handling, etc. Plus, there are certain tasks that you simply can't solve with these standard functions: interaction with cookies, authentication, form submission, file upload, etc.
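To illustrate one of those tasks, here is a rough sketch of keeping cookies across two requests with cURL; the URLs, form fields and cookie-jar path are all placeholders:

// log in once and store the session cookies in a file
$ch = curl_init("http://www.example.com/login");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, array("user" => "me", "pass" => "secret"));
curl_setopt($ch, CURLOPT_COOKIEJAR, "cookies.txt");
curl_exec($ch);
curl_close($ch);

// reuse the stored cookies on the next request
$ch = curl_init("http://www.example.com/members");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_COOKIEFILE, "cookies.txt");
$page = curl_exec($ch);
curl_close($ch);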

cURL is a powerful library that supports many different protocols, options, and provides detailed information about URL requests.

Basic structure

  • Initialization
  • Assignment of parameters
  • Execution and fetching result
  • Freeing up memory

// 1. initialization
$ch = curl_init();
// 2. specify the options, including the url
curl_setopt($ch, CURLOPT_URL, "http://www.nettuts.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
// 3. get the HTML as the result
$output = curl_exec($ch);
// 4. close the connection
curl_close($ch);

Step #2 (that is, calling curl_setopt()) will be discussed in this article much more than all the other steps, because this is the stage where all the most interesting and useful things happen. cURL has a huge number of options that can be set to configure the URL request precisely. We will not go through the entire list, but will focus only on what I consider necessary and useful for this lesson. You can study the rest yourself if the topic interests you.

Error Checking

In addition, you can also use conditional statements to check the operation for success:

// ...
$output = curl_exec($ch);
if ($output === FALSE) {
    echo "cURL Error: " . curl_error($ch);
}
// ...

Note a very important point here: we must use "=== false" for the comparison instead of "== false". For those not in the know, this helps us distinguish an empty result from the boolean value false, which indicates an error.
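A tiny illustration of the difference (a successful request with an empty body versus an actual failure):

$output = "";                 // a successful request that returned an empty body
var_dump($output == false);   // bool(true)  - would be mistaken for an error
var_dump($output === false);  // bool(false) - correctly treated as success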

Receiving the information

Another additional step is to obtain data about the cURL request after it has been executed.

// ...
curl_exec($ch);
$info = curl_getinfo($ch);
echo "Took " . $info["total_time"] . " seconds for url " . $info["url"];
// ...

The returned array contains the following information:

  • "url"
  • "content_type"
  • "http_code"
  • “header_size”
  • “request_size”
  • "filetime"
  • “ssl_verify_result”
  • “redirect_count”
  • “total_time”
  • “namelookup_time”
  • “connect_time”
  • “pretransfer_time”
  • “size_upload”
  • “size_download”
  • “speed_download”
  • “speed_upload”
  • “download_content_length”
  • “upload_content_length”
  • “starttransfer_time”
  • “redirect_time”

Redirect detection depending on browser

In this first example we will write code that can detect URL redirects based on different browser settings. For example, some websites redirect the browsers of cell phones, or of any other device, to a different address.

We're going to use the CURLOPT_HTTPHEADER option to define our outgoing HTTP headers, including the user's browser name and available languages. Eventually we will be able to determine which sites are redirecting us to different URLs.

// URLs to test
$urls = array(
    "http://www.cnn.com",
    "http://www.mozilla.com",
    "http://www.facebook.com"
);
// browsers to test with
$browsers = array(
    "standard" => array(
        "user_agent" => "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6 (.NET CLR 3.5.30729)",
        "language"   => "en-us,en;q=0.5"
    ),
    "iphone" => array(
        "user_agent" => "Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A537a Safari/419.3",
        "language"   => "en"
    ),
    "french" => array(
        "user_agent" => "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; GTB6; .NET CLR 2.0.50727)",
        "language"   => "fr,fr-FR;q=0.5"
    )
);

foreach ($urls as $url) {
    echo "URL: $url\n";
    foreach ($browsers as $test_name => $browser) {
        $ch = curl_init();
        // specify the url
        curl_setopt($ch, CURLOPT_URL, $url);
        // specify the headers for this browser
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            "User-Agent: {$browser['user_agent']}",
            "Accept-Language: {$browser['language']}"
        ));
        // we don't need the page contents
        curl_setopt($ch, CURLOPT_NOBODY, 1);
        // we do need the HTTP headers
        curl_setopt($ch, CURLOPT_HEADER, 1);
        // return the result instead of printing it
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $output = curl_exec($ch);
        curl_close($ch);
        // was there an HTTP redirect?
        if (preg_match("!Location: (.*)!", $output, $matches)) {
            echo "$test_name: redirects to $matches[1]\n";
        } else {
            echo "$test_name: no redirection\n";
        }
    }
    echo "\n\n";
}

First we specify the list of site URLs that we are going to check. Then we define the browser profiles with which to test each of those URLs. After that, we loop through all of them and collect the results.

The trick we use in this example to set cURL settings will allow us to get not the content of the page, but only the HTTP headers (stored in $output). Next, using a simple regex, we can determine whether the string “Location:” was present in the received headers.

When you run this code, you should get something like the following result:

Creating a POST request to a specific URL

When forming a GET request, the data to be transmitted can be passed to the URL via a "query string". For example, when you search on Google, the search terms appear in the query string of the URL in the address bar:

http://www.google.com/search?q=ruseller

In order to simulate this request, you don't need to use cURL facilities. If laziness completely overcomes you, use the “file_get_contents()” function to get the result.
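If you do want to make the same GET request through cURL (for example, to get the error handling described above), a sketch might look like this, reusing the query term from the example URL:

$query = http_build_query(array("q" => "ruseller"));
$ch = curl_init("http://www.google.com/search?" . $query);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
curl_close($ch);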

But the thing is that some HTML forms send POST requests. The data of such forms is carried in the body of the HTTP request rather than in the URL, as in the previous case. For example, if you fill out a form on a forum and click the search button, a POST request will most likely be made to something like:

http://codeigniter.com/forums/do_search/

We can write a PHP script that can simulate this kind of URL request. First let's create a simple file to accept and display POST data. Let's call it post_output.php:

print_r($_POST);

Then we create a PHP script to make the cURL request:

$url = "http://localhost/post_output.php"; $post_data = array ("foo" => "bar", "query" => "Nettuts", "action" => "Submit"); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // indicate that we have a POST request curl_setopt($ch, CURLOPT_POST, 1); // add variables curl_setopt($ch, CURLOPT_POSTFIELDS, $post_data); $output = curl_exec($ch); curl_close($ch); echo $output;

When you run this script you should get a result like this:

Thus, the POST request was sent to the post_output.php script, which in turn output the superglobal $_POST array, the contents of which we obtained using cURL.

Uploading a file

First, let's create the file that will receive the upload and display what arrived; let's call it upload_output.php:

print_r($_FILES);

And here is the script code that performs the above functionality:

$url = "http://localhost/upload_output.php"; $post_data = array ("foo" => "bar", // file to upload "upload" => "@C:/wamp/www/test.zip"); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $post_data); $output = curl_exec($ch); curl_close($ch); echo $output;

When you want to upload a file, all you have to do is pass it as a normal post variable, preceded by an @ symbol. When you run the written script, you will get the following result:
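A caveat for newer PHP versions: the "@" prefix is deprecated as of PHP 5.5 and disabled by default from PHP 5.6, so there the upload has to go through the CURLFile class instead. A sketch of the same script adapted for that (the URL and file path are the same placeholders as above):

// PHP 5.5+ sketch: CURLFile instead of the "@" prefix
$url = "http://localhost/upload_output.php";
$post_data = array(
    "foo"    => "bar",
    "upload" => new CURLFile("C:/wamp/www/test.zip")
);
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post_data);
$output = curl_exec($ch);
curl_close($ch);
echo $output;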

Multiple cURL

One of cURL's greatest strengths is the ability to create "multi" cURL handles. This allows you to open connections to multiple URLs simultaneously and asynchronously.

With a classic cURL request, script execution is suspended while it waits for the URL request to complete, after which the script can continue. If you intend to interact with a whole bunch of URLs, this adds up to quite a significant amount of time, since you can only work with one URL at a time. We can fix this by using multi handles.

Let's look at the example code I took from php.net:

// create two cURL resources
$ch1 = curl_init();
$ch2 = curl_init();
// specify the URL and other options
curl_setopt($ch1, CURLOPT_URL, "http://lxr.php.net/");
curl_setopt($ch1, CURLOPT_HEADER, 0);
curl_setopt($ch2, CURLOPT_URL, "http://www.php.net/");
curl_setopt($ch2, CURLOPT_HEADER, 0);
// create the multi cURL handle
$mh = curl_multi_init();
// add the individual handles
curl_multi_add_handle($mh, $ch1);
curl_multi_add_handle($mh, $ch2);
$active = null;
// execute
do {
    $mrc = curl_multi_exec($mh, $active);
} while ($mrc == CURLM_CALL_MULTI_PERFORM);

while ($active && $mrc == CURLM_OK) {
    if (curl_multi_select($mh) != -1) {
        do {
            $mrc = curl_multi_exec($mh, $active);
        } while ($mrc == CURLM_CALL_MULTI_PERFORM);
    }
}
// clean up
curl_multi_remove_handle($mh, $ch1);
curl_multi_remove_handle($mh, $ch2);
curl_multi_close($mh);

The idea is that you can use multiple cURL handlers. Using a simple loop, you can keep track of which requests have not yet completed.

There are two main loops in this example. The first do-while loop calls curl_multi_exec(). This function is non-blocking: it does as much as it can right now and returns a status value. As long as the returned value is the constant CURLM_CALL_MULTI_PERFORM, there is still immediate work to do (for example, HTTP headers are currently being sent to the URLs); that's why we keep calling it until we get a different result.

The next loop runs while the variable $active is true. $active is the second parameter of curl_multi_exec(), and it stays true as long as any of the existing connections are active. Inside the loop we call curl_multi_select(), which blocks while there is at least one active connection, until there is activity on it. When that happens, we drop back into the inner loop to keep processing the requests.

Now let's apply this knowledge to an example that will be really useful for a large number of people.

Checking links in WordPress

Imagine a blog with a huge number of posts and messages, each of which contains links to external Internet resources. Some of these links might already be dead for various reasons. The page may have been deleted or the site may not be working at all.

We are going to create a script that will analyze all links and find non-loading websites and 404 pages, and then provide us with a detailed report.

Let me say right away that this is not an example of how to create a WordPress plugin; WordPress simply makes a good testing ground for our purposes.

Let's finally begin. First we need to fetch all the links from the database:

// configuration
$db_host = "localhost";
$db_user = "root";
$db_pass = "";
$db_name = "wordpress";
$excluded_domains = array("localhost", "www.mydomain.com");
$max_connections = 10;

// initialize the variables
$url_list = array();
$working_urls = array();
$dead_urls = array();
$not_found_urls = array();
$active = null;

// connect to MySQL
if (!mysql_connect($db_host, $db_user, $db_pass)) {
    die("Could not connect: " . mysql_error());
}
if (!mysql_select_db($db_name)) {
    die("Could not select db: " . mysql_error());
}

// select all published posts that contain links
$q = "SELECT post_content FROM wp_posts
      WHERE post_content LIKE '%href=%'
      AND post_status = 'publish'
      AND post_type = 'post'";
$r = mysql_query($q) or die(mysql_error());

while ($d = mysql_fetch_assoc($r)) {
    // fetch the links with a regular expression
    if (preg_match_all('!href="(.*?)"!', $d["post_content"], $matches)) {
        foreach ($matches[1] as $url) {
            $tmp = parse_url($url);
            if (isset($tmp["host"]) && in_array($tmp["host"], $excluded_domains)) {
                continue;
            }
            $url_list[] = $url;
        }
    }
}

// remove duplicates
$url_list = array_values(array_unique($url_list));

if (!$url_list) {
    die("No URL to check");
}

First, we set up the configuration for talking to the database, then we list the domains that will not take part in the check ($excluded_domains). We also define the maximum number of simultaneous connections we will use in the script ($max_connections). We then connect to the database, select the posts that contain links, and accumulate them into an array ($url_list).

The following code is a bit complicated, so I will walk through it step by step:

// 1. create the multi handle
$mh = curl_multi_init();

// 2. add the initial set of URLs
for ($i = 0; $i < $max_connections; $i++) {
    add_url_to_multi_handle($mh, $url_list);
}

// 3. start the execution
do {
    $mrc = curl_multi_exec($mh, $active);
} while ($mrc == CURLM_CALL_MULTI_PERFORM);

// 4. main loop
while ($active && $mrc == CURLM_OK) {

    // 5. if anything happened on the connections
    if (curl_multi_select($mh) != -1) {

        // 6. do the work
        do {
            $mrc = curl_multi_exec($mh, $active);
        } while ($mrc == CURLM_CALL_MULTI_PERFORM);

        // 7. is there info to read?
        if ($mhinfo = curl_multi_info_read($mh)) {
            // this means one of the requests has finished
            // 8. extract the info
            $chinfo = curl_getinfo($mhinfo["handle"]);

            // 9. dead link?
            if (!$chinfo["http_code"]) {
                $dead_urls[] = $chinfo["url"];
            // 10. 404?
            } else if ($chinfo["http_code"] == 404) {
                $not_found_urls[] = $chinfo["url"];
            // 11. working
            } else {
                $working_urls[] = $chinfo["url"];
            }

            // 12. clean up after ourselves
            curl_multi_remove_handle($mh, $mhinfo["handle"]);
            // if the script gets stuck in a loop, comment out this call
            curl_close($mhinfo["handle"]);

            // 13. add a new url and keep working
            if (add_url_to_multi_handle($mh, $url_list)) {
                do {
                    $mrc = curl_multi_exec($mh, $active);
                } while ($mrc == CURLM_CALL_MULTI_PERFORM);
            }
        }
    }
}

// 14. finish up
curl_multi_close($mh);

echo "==Dead URLs==\n";
echo implode("\n", $dead_urls) . "\n\n";
echo "==404 URLs==\n";
echo implode("\n", $not_found_urls) . "\n\n";
echo "==Working URLs==\n";
echo implode("\n", $working_urls);

// 15. adds a url to the multi handle
function add_url_to_multi_handle($mh, $url_list) {
    static $index = 0;
    // if there are still urls left to fetch
    if (isset($url_list[$index])) {
        // a new curl handle
        $ch = curl_init();
        // specify the url
        curl_setopt($ch, CURLOPT_URL, $url_list[$index]);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_NOBODY, 1);
        curl_multi_add_handle($mh, $ch);
        // move on to the next url
        $index++;
        return true;
    } else {
        // no more URLs to add
        return false;
    }
}

Here I will try to explain everything in detail. The numbers in the list correspond to the numbers in the comment.

  1. Create the multi handle;
  2. We will write the add_url_to_multi_handle() function a little later. Each time it is called, a new url starts being processed. Initially we add 10 ($max_connections) URLs;
  3. To get things started we must run curl_multi_exec(). As long as it returns CURLM_CALL_MULTI_PERFORM, there is still something to do right now. We need this mainly to open the connections;
  4. Next comes the main loop, which runs as long as we have at least one active connection;
  5. curl_multi_select() waits until something happens on one of the URL requests being fetched;
  6. Once again we have to let cURL do some work, namely fetch the returned response data;
  7. Here we check whether a request has finished; if so, curl_multi_info_read() returns an array with information about it;
  8. That array contains the cURL handle of the finished request. We use it to pull the information about that individual cURL request;
  9. If the link was dead or the request timed out, there will be no HTTP code to look at;
  10. If the link returned a 404 page, the HTTP code will contain the value 404;
  11. Otherwise, we have a working link in front of us. (You can add extra checks for error code 500, etc...);
  12. Next we remove the cURL handle because we no longer need it;
  13. Now we can add another url and run everything we talked about before;
  14. At this step the script finishes its work. We can remove everything we don't need and print the report;
  15. Finally, we write the function that adds a url to the multi handle. The static variable $index is incremented each time the function is called.

I used this script on my blog (with some broken links that I added on purpose to test it) and got the following result:

In my case, the script took a little less than 2 seconds to crawl through 40 URLs. The performance gain becomes significant when working with larger numbers of URLs. If you open ten connections at the same time, the script can run up to ten times faster.

A few words about other useful cURL options

HTTP Authentication

If the URL is protected by HTTP authentication, you can easily use the following script:

$url = "http://www.somesite.com/members/"; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // specify the username and password curl_setopt($ch, CURLOPT_USERPWD, "myusername:mypassword"); // if redirection is allowed curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // then save our data in cURL curl_setopt($ch, CURLOPT_UNRESTRICTED_AUTH, 1); $output = curl_exec($ch); curl_close($ch);

FTP upload

PHP also has a library for working with FTP, but nothing prevents you from using cURL tools here:

// open the file
$fp = fopen("/path/to/file", "r");
// the url must have the following form
$url = "ftp://username:password@example.com:21/path/to/new/file";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_UPLOAD, 1);
curl_setopt($ch, CURLOPT_INFILE, $fp);
curl_setopt($ch, CURLOPT_INFILESIZE, filesize("/path/to/file"));
// ASCII transfer mode
curl_setopt($ch, CURLOPT_FTPASCII, 1);
$output = curl_exec($ch);
curl_close($ch);

Using Proxy

You can perform your URL request through a proxy:

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.example.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// specify the proxy address
curl_setopt($ch, CURLOPT_PROXY, "11.11.11.11:8080");
// and a username and password if required
curl_setopt($ch, CURLOPT_PROXYUSERPWD, "user:pass");
$output = curl_exec($ch);
curl_close($ch);

Callback functions

It is also possible to specify a function that cURL will call repeatedly while the request is still running. For example, while the response content is being downloaded, you can start using the data without waiting for the download to finish.

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://net.tutsplus.com");
curl_setopt($ch, CURLOPT_WRITEFUNCTION, "progress_function");
curl_exec($ch);
curl_close($ch);

function progress_function($ch, $str) {
    echo $str;
    return strlen($str);
}

A function like this MUST return the length of the string it received; this is a requirement.

Conclusion

Today we learned how you can use the cURL library for your own selfish purposes. I hope you enjoyed this article.

Thank you! Have a good day!

We have: php 5.2.3, Windows XP, Apache 1.3.33
Problem - the cURL module is not detected if PHP is run under Apache
In php.ini extension=php_curl.dll is uncommented, extension_dir is set correctly,
libeay32.dll and ssleay32.dll copied to c:\windows\system32.
However, the phpinfo() function does not show the cURL module among those installed, and when Apache is launched, the following is written to the log:

PHP Startup: Unable to load dynamic library "c:/php/ext/php_curl.dll" - The specified module was not found.

If you run php from the command line, then scripts containing commands from cURL work normally, but if you run it from Apache, they produce the following:
Fatal error: Call to undefined function: curl_init() - regardless of how PHP is installed - as CGI or as a module.

On the Internet I have repeatedly come across a description of this problem - specifically for the cURL module, but the solutions that were proposed there do not help. Moreover, I already changed PHP 5.2 to PHP 5.2.3 - it still didn’t help.

David Mzareulyan[dossier]
I have one php.ini - I checked it by searching the disk. The fact that the same php.ini is used is easily confirmed by the fact that changes made to it affect both the launch of scripts from Apache and from the command line.

Daniil Ivanov[dossier] Better make a file with a call

and open it through a browser.
And then run php -i | grep ini on the command line and check the paths to php.ini as PHP itself sees them, not just by the presence of the file on disk.
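(On Windows, where grep is usually not available, a rough equivalent would be: php -i | findstr /i ini)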

Daniil Ivanov[dossier] What does php -i output? The default binary may look for the config in a different location, depending on the compilation options. This is not the first time I have encountered the fact that mod_php.dll and php.exe look at different ini files and what works in one does not work in the other.

Vasily Sviridov[dossier]
php -i produces the following:

Configuration File (php.ini) Path => C:\WINDOWS
Loaded Configuration File => C:\PHP\php.ini

Moving the php.ini file to the Windows directory does not change the situation.

Daniil Ivanov[dossier]
What about the other modules? For example php_mysql: does it load? Or is it just cURL that is being so nasty?

Hmm, it doesn't load for me either... on a very different configuration (Apache 2.2 plus PHP 5.1.6 under Zend Studio). But that's not the point. An experiment with launching Apache from the command line (more precisely, from FAR) showed something interesting. Without trying to load cURL, the whole bundle starts fine. When I try to load cURL, it reports an error in... php5ts.dll.

Hello!
I had a similar problem and was looking for a solution for a long time; I installed a newer version of PHP, and finally found this forum. There was no solution here, so I kept trying on my own.

I had installed Zend Studio, and before that there was an earlier version of PHP. Perhaps one of them installed its own libraries, and they stayed behind, outdated.

Thanks for the tips, especially the last one from "Nehxby". I went to C:\windows\system32 and found that the libraries libeay32.dll and ssleay32.dll there were not the same size as the original ones. I had installed memcached; maybe that is where they came from. So if you have installed anything lately, take a look in system32 :)

I had the same problem. Running php -i | grep ini showed that the zlib1.dll library was missing; it was in the Apache folder, so I copied it into the PHP folder. I repeated the command and it then said zlib.dll was missing, so I copied that into the Apache folder and everything worked. The same went for php5ts.dll, so check that all the required libraries are present.

I decided to add to this, because I also ran into this problem. I came across this forum through a link on another site. In general, all the proposed options are nothing more than crutches. The essence of the solution on Windows: you need to set the PATH variable to point to where your PHP is located, and hallelujah, cURL no longer gives any errors, and neither do the other libraries...