How to consume a web service in PHP

Have you ever wondered how you can consume a web service in PHP? Although I code mainly in C#, I run quite a few sites in PHP and I wanted the ability to automatically notify ping servers when new content is available on my websites. This brings the search engine crawlers to my sites almost immediately and gets my content indexed faster. WordPress does this automatically by pinging PingOmatic, but for sites which run my own Content Management System (CMS), I had to do the pinging myself.

You can easily do the pinging in PHP, but I also wanted the ability to send pings for scheduled content releases: if my content is going to be published on a date in the future, I need to be able to send the ping at that point in time. You can get a cron job to run a script for you or, like WordPress does, hook a function into all HTTP requests to see if there are any pings to send. However, I find the latter inappropriate, and the former still means maintaining PHP scripts; I’d rather have it done in C#.
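
For context, the ping itself is just an XML-RPC call. Here is a minimal sketch of how that could look in PHP, assuming the standard weblogUpdates.ping method and Ping-O-Matic’s public endpoint (the function names are mine, for illustration):

```php
<?php
// Build a weblogUpdates.ping XML-RPC request body for the given post.
function build_ping_request($title, $url) {
    $title = htmlspecialchars($title, ENT_XML1);
    $url   = htmlspecialchars($url, ENT_XML1);
    return '<?xml version="1.0"?>'
         . '<methodCall><methodName>weblogUpdates.ping</methodName>'
         . '<params>'
         . '<param><value><string>' . $title . '</string></value></param>'
         . '<param><value><string>' . $url . '</string></value></param>'
         . '</params></methodCall>';
}

// POST the request to a ping server (Ping-O-Matic shown as an example).
function send_ping($title, $url, $endpoint = 'http://rpc.pingomatic.com/') {
    $ch = curl_init($endpoint);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
    curl_setopt($ch, CURLOPT_POSTFIELDS, build_ping_request($title, $url));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);
    return $response; // the XML-RPC response; inspect it for a fault
}
```

Calling send_ping() straight after publishing covers the immediate case; the scheduled case is what the web service below is for.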

My idea is to create a web service which will accept the title of the content, the URL where the content can be found and the publish date. These details will then be saved in a table. A Windows Service will then check the table to see if any pings need to be sent. My Windows Service will run every 30 minutes or so, pretty much like a cron job.

Okay, back to consuming web services in PHP. If you use PHP’s built-in SOAP support, you may run into problems (I can’t recall exactly which ones I hit when I was googling). The best way is to use NuSOAP. You just need to download it from SourceForge and upload nusoap.php to your server. Once that is done, you can easily consume any web service as follows:

$client = new nusoap_client('', 'wsdl'); // pass your service's WSDL URL as the first argument

// Check for a constructor error
$err = $client->getError();
if ($err) {
    return $err;
}

// Doc/lit parameters get wrapped
$param = array('title' => $title, 'url' => $url, 'publishDate' => $publishDate);
$result = $client->call('Schedule', array('parameters' => $param), '', '', false, true);

// Check for a fault
if ($client->fault) {
    return $result;
}

// Check for errors
$err = $client->getError();
if ($err) {
    return $err;
}

// Return the result
return $result;
It is very straightforward to call a web service with NuSOAP and I recommend it. You can find more examples of how to call web services when you download the code from SourceForge. The above did the trick for me.

WordPress automatic upgrade not working and asking for FTP details

I run quite a few sites and on some of them I’ve installed WordPress for convenience. Every now and then a new version of WordPress is released and I like to keep my installation up-to-date so that I’ve got the latest security patches and new features. For earlier versions of WordPress (before 2.7), you had to do the upgrade manually, but with WordPress 2.7 you now have the option to upgrade automatically. As soon as a newer version is available, a link appears in the admin panel telling you to update your installation. You should be able to click on the update link and WordPress itself will download the latest files, extract them and update your installation accordingly. On one of my installations, it took no more than 15 seconds to complete the upgrade. However, on my other hosting accounts, I’ve found that I just cannot update the software as it keeps asking me for my FTP details.

I know for a fact that the automatic upgrade should work, but I wanted to give the FTP connection a go as well, and it didn’t run as expected either: WordPress either couldn’t extract some files or couldn’t create folders/files. I could have fixed that by giving write access to the required folders, but it was getting a bit much, and I don’t like the idea of handing over my FTP username/password in the first place. So I decided to find a solution to the automatic upgrade problem instead, and I spent hours researching the topic.

Here are the solutions that I tried:

  • define('FS_METHOD', 'direct'); in wp-config.php (didn’t work)
  • Give write permissions 777 to the whole WordPress directory for testing (didn’t work)
  • Editing file.php (found in wp-admin/includes) so that getmypid() is returned instead of getmyuid() (didn’t work)
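
For reference, the first of those tweaks is a one-line addition to wp-config.php. Just make sure the quotes are plain ASCII single quotes, because curly quotes pasted from a blog post will break the file:

```php
<?php
// In wp-config.php: tell WordPress to write files directly
// instead of falling back to asking for FTP credentials.
define('FS_METHOD', 'direct');
```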

If you’ve got full access to the server where WordPress is installed, you should be able to fix the problem easily, but if you’re on shared hosting, things get complicated. My WordPress files/folders are owned by me on the hosting account, but when a PHP script is executed, it runs as the nobody account. This is the default Apache user that the server uses to run scripts on shared hosting, and that’s what causes the headache. WordPress does a test to see if a file is owned by the user currently executing the script and, if that fails, it prompts you for your FTP connection details.
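
You can reproduce that test yourself. Roughly (this is a simplified sketch, not the exact code; the real check lives in wp-admin/includes/file.php), WordPress creates a temporary file and compares its owner with the owner of the running script:

```php
<?php
// Simplified sketch of the ownership check WordPress performs.
$temp_file = tempnam(sys_get_temp_dir(), 'wp-check-');
$can_write_directly = (fileowner($temp_file) === getmyuid());
unlink($temp_file);

// On shared hosting, PHP often runs as 'nobody' while your files are
// owned by your account, so the check fails and WordPress falls back
// to asking for FTP credentials.
```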

I’ve looked into ways to overcome the problem, but the shared hosting account doesn’t give me shell access, otherwise I would have been able to change the owner of the files/folders. If I could run Apache under my own user account, I should be able to get the WordPress automatic upgrade to succeed, but I’m still looking for a solution at the moment. I’ve looked into PHP scripts executing commands (with exec) but I’ve still got to learn how to use that. If you’ve got the solution, let me know.

Should you update your website/blog regularly to increase your rankings?

Many people believe that updating your website regularly is the way to achieve higher rankings in the search engines. This is not entirely true and I’ll explain why. As the web index grows bigger and bigger, search engines like Google and Bing are looking for better ways to determine which content is worth including in their index and which to drop. Duplicate content is obviously not going to work for rankings, and if you’re rewriting content by saying the same thing in a different way, that will not work either. You need to be adding something original to your articles to make them stand out from all the other articles on the same subject. This is the long-term strategy and will protect you against future algorithm changes such as Google Caffeine.

Now, coming back to why updating your website/blog regularly is not worth it if you’re just recycling content, you need to understand how rankings come into play. Links play an important role in improving your position in the SERPs, but that’s not everything. Search engines try to provide the best results for any given query; that’s their aim, and hence they make algorithm changes every now and then to refine the results you see towards a more perfect match. If you were to have cosmetic surgery for some reason or another, you would go to a professional, wouldn’t you? You would consider a surgeon who is known to deliver results, and this can be checked by looking at success rates, past customer experiences and the like. In the search engine world, the same principle applies. If you own a website about “cosmetic surgery”, then you would need to have pages which talk about the different kinds of surgery available, for example liposuction, nose jobs, botox and so on. By having multiple pages about the different aspects of “cosmetic surgery”, you show off your expertise in this area. You will be trusted in the “cosmetic surgery” industry and search engines will love you for that. This is how domain authority is achieved.

But wait, it’s not as simple as it looks. If you’re not providing compelling and original content which is of value to the visitors of your site, then you won’t move up the rankings. Each new page that you put up on your site can be a boost for the ranking of your keywords, but only if it is on-topic and original. Google needs to think it’s good enough to keep in its index, and by targeting topics related to your keywords, you increase your power to rank for those keywords.

Creating new content on a daily/weekly/monthly basis can bring you extra traffic provided your content is unique. If these articles address related terms for the theme of your website, you will move up the SERPs over time as you gain more trust from the search engines through the targeting of secondary and long-tail keywords. However, if you’re adding articles just for the sake of keeping your website frequently updated, then you will not see any benefit from it.

New content will definitely bring GoogleBot to your site more often, but after Google has crawled your new content, it won’t index it if it offers nothing of value. Therefore it is better to take the time required to write a good quality article rather than rushing to write dozens of articles which people are not going to appreciate. Don’t write for the sake of writing; write to address problems that your visitors may encounter and give information that will be helpful to them, and in time you’ll reap the benefits of this long-term strategy.

Creating multisites with WordPress 3.0 is not for everyone!

A lot of people want to be able to run as many sites as they want with just one installation of WordPress. Before WordPress 3.0, you had to use WordPress MU (Multi User), which was not in line with the core WordPress code base. So you would be able to have any number of sites running on WordPress MU, but you might miss out on some functionality. This was not such a big deal for the many people who liked the idea of running many websites/blogs from just one installation, but with the release of WordPress 3.0, this has been made even simpler.

The greatest benefit of having many sites on one single WordPress installation is maintenance. You just need to update your files in one place rather than in several places as soon as a new version comes out, and it is advisable to upgrade to the latest stable version of WordPress as soon as it’s made available. So this is a big time saver, not just for updating but also for launching new sites.

When I first tried WordPress MU, I was very keen to get my sites running, but I eventually found out that to be able to run different domains on WordPress MU, you need to set up wildcard mapping for subdomains. Now, because WordPress MU has been merged into WordPress 3.0, you have the same problem. I remember how difficult it was to set up, and I eventually gave up.

So if you have your main blog on one domain and want your other sites to be hosted on the same WordPress 3.0 installation, then you’ll have to do a wildcard mapping. If you’ve got a dedicated server, you can get it done quite easily, but if you’re on shared hosting, this may be a problem because some hosting companies don’t allow wildcard mapping. And if you do manage to do it, there will be another problem: any subdomain you already had may stop working because of the wildcard, so you’ll have to come up with a way to handle that.
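
If you do have access to the server configuration, the wildcard mapping comes in two parts: a wildcard DNS record (*.yourdomain.com pointing at your server’s IP) and a matching server alias in the web server configuration. A sketch for Apache, using a placeholder domain and paths:

```apache
# Hypothetical vhost: catch yourdomain.com and every subdomain of it.
# You also need a wildcard DNS A record: *.yourdomain.com -> your server IP.
<VirtualHost *:80>
    ServerName yourdomain.com
    ServerAlias *.yourdomain.com
    DocumentRoot /var/www/wordpress
</VirtualHost>
```

This is exactly the part shared hosts typically won’t let you touch, which is why the setup fails there.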

I think the idea of hosting multiple sites/blogs is very appealing, especially if you want to create niche websites with just a couple of pages. However, WordPress 3.0 is not ideal for that purpose, nor is WordPress MU. On the other hand, WP Hive, which is a plugin for WordPress, works brilliantly.

When I heard that WordPress 3.0 allows multisite creation, the first thought that came to mind was that WP Hive would now become redundant, but that’s not the case. The official documentation of WordPress 3.0 shows that you need wildcard mapping, whereas with WP Hive you only need to add the domain you want to have WordPress on as an addon domain.

I like simplicity above everything else. For those on shared hosting, I suggest you have a look at WP Hive if having multiple sites/blogs on one WordPress installation is what you need.

Hashing passwords with a salt before storing

Applications which require users to register need to store the user details in some kind of data store, and if you’re storing sensitive information like passwords, it is imperative that the passwords are protected before you store them. It is often easier to just store the passwords as clear text to avoid the hassle of having to encrypt/decrypt them before using them, but this will eventually lead to a BIG security flaw in your application.

Some people think that because they’ve got a small web app, they are not really at risk from hackers, but the truth is that there are sick people out there who enjoy looking for vulnerabilities in your website. Of course there’s a password to access your database, and that provides a first level of security, but if that database password is compromised, then all your users will be at risk. You might be wondering what’s the worst that can happen if somebody manages to get your users’ passwords. Well, first of all, they will have the power to impersonate those users on your site, but it doesn’t end there. Even if your website only allows registered users to post comments, the damage is not limited to the hacker posting comments on your site: research has found that many people use the same password on a lot of sites. The hacker can easily get a person’s email address, and if he can get into their email account, he may be lucky enough to find other sites he can log into as well, pretending to be the said user. This is not something that you wish to happen to you, so security is the first thing we need to think about.

It is important to note that if you have a database administrator, he will have unrestricted access to your database and will be able to read your users’ passwords and may use them in unethical ways (hopefully he won’t, but you should never take the chance).


Instead of storing the passwords as plain text in the database, you could encrypt them, and that would add a nice layer of obscurity, but anything that can be encrypted can also be decrypted in the same way it was encrypted. This is because encryption/decryption engines use a key to do the work, and if that key is guessed or found, it becomes easy to recover the passwords.

Hashing algorithm

A much better way to secure the passwords is to hash them. Hashing is a one-way algorithm in the sense that once a value has been hashed, you cannot get the original back. When the user enters his password on your site, you hash the password that he entered and check the hash value that you get against the hash value stored in the database for the user’s record. If they match, it means the user has entered the correct password.

That’s all good, but because many people tend to use common words found in the dictionary as their password, hash tables have been created to perform dictionary attacks on hashed passwords. Say you’ve hashed the password ‘prince’ with SHA1 and a hacker manages to match his hash of ‘prince’ against yours: he then knows the password. Therefore it is advisable to salt the hash to make dictionary attacks less successful.

Hashing with a salt

Before you actually hash the password, you add a salt to it, so you hash (salt + password). It is better to add the salt at the beginning of the password rather than at the end. A salt is just a random string: you create a random set of characters to use as the salt and store it together with the hashed password in the database. The added benefit is that if two users have the same password, the hash values of their passwords won’t be the same, because a different random salt has been added to each, which means a hacker cannot know for sure whether people are using the same password.

It is better to use a per-user random salt rather than a single site-wide salt, because the latter makes it that little bit easier to crack the passwords. With a random salt, the hacker needs to perform the comparison for each password individually, using its salt and hashed value. This increases the time taken to recover the passwords and gives you time to notify your users to change their passwords, if you know that your site’s been compromised, that is.
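
Here is a minimal PHP sketch of the scheme described above, using SHA-256 via PHP’s hash() function. The function names are mine, and the salt generation shown is illustrative rather than cryptographically strong:

```php
<?php
// Generate a random salt to store alongside the hashed password.
// (Illustrative only: mt_rand() is not a cryptographic RNG.)
function make_salt($length = 16) {
    $chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
    $salt = '';
    for ($i = 0; $i < $length; $i++) {
        $salt .= $chars[mt_rand(0, strlen($chars) - 1)];
    }
    return $salt;
}

// Salt is prepended to the password before hashing, as described above.
function hash_password($password, $salt) {
    return hash('sha256', $salt . $password);
}

// At login: re-hash what the user typed and compare with the stored hash.
function verify_password($password, $salt, $stored_hash) {
    return hash_password($password, $salt) === $stored_hash;
}
```

At registration you would call make_salt() once per user and store both the salt and hash_password($password, $salt) in the user’s row.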

Which hashing algorithm to use?

MD5 has shown weaknesses and there are concerns around SHA1 because of some vulnerabilities. I’d consider using SHA256: SHA512 is more secure, but it takes roughly twice as long to compute, and I believe SHA256 is good enough for security at the moment. Let’s see what SHA-3 will give us, eh?

30 days challenge without Facebook

I’m finding it really hard these days to focus on my work. It’s been a long time since I decided to rewrite my websites, and I only seem to be gathering ideas rather than putting them into practice. I’ve noticed that when I get bored with what I’m currently doing, I just log onto Facebook and see what other people are doing. It’s pretty much just gossip, but it’s kind of addictive. Therefore I’ve decided to take a 30-day challenge of not using Facebook.

I started this challenge yesterday (16th of June 2010) and I’ll be completing it in a month’s time, on the 16th of July 2010. I don’t have many friends on Facebook, probably around 25, but the problem I’m experiencing is that whenever I want a break, I just sign onto Facebook and, without realising it, I’ve wasted some 30 minutes.

It will be good to see if I become more productive without Facebook. I might even consider suspending my account so that I don’t receive any updates through email, but for the time being I’m just leaving it as is.

How to connect the Wii to the Internet?

I’ve had my Wii for over 2 years now and it was only a few days ago that I actually wondered why I hadn’t set up the Wii to connect to the Internet. The thought of being able to surf online on my big 50-inch TV was enough to get me started. Before I tell you the steps to connect your Wii console to your internet connection, it might be worth stating that I was rather disappointed afterwards when I found out that you need to pay to get on the internet through your Wii. You have to buy Wii Points which allow you to surf the internet, and when they run out, you need to buy more. It was not worth the hassle if you ask me: I am already paying for broadband internet at home, so why should I pay again to surf on the Wii? I’d rather surf on my laptop. If the service were free, it would be a totally different scenario, and being able to surf on a big screen TV would be awesome.

Anyway, here’s how you connect you Wii to the Internet:

  1. First make sure you have the password of your router at hand (I’m assuming you have your home internet network protected)
  2. Switch on your Wii and go to Wii Options from the menu (hover over the square images to know which one it is)
  3. Click on Wii settings and then press the arrow on the right to see more options
  4. Choose Internet and then click on Connection Settings
  5. Choose Connection 1 from the list displayed
  6. Then click Wireless Connection followed by Search for an Access Point
  7. Choose your home internet connection from the list and click OK
  8. Click OK and test the connection to see if everything is working fine
  9. You will then be asked to update the Wii, so click Yes

This is pretty much it. When you go to the Wii Menu, you will be able to see the Wii Shop, from which you can purchase points to surf the internet.

How to migrate Microsoft Outlook from one computer to another?

After buying a new laptop, one of the things I had to do was migrate Microsoft Outlook 2007 from my old computer running Vista onto the new laptop running Windows 7. Now, it is quite straightforward to copy the emails that you’ve received (or sent) to another PC, but getting your email accounts to work on the new machine is a real killer.

If you have just one or two email accounts in MS Outlook 2007, then you can manually create the accounts once you have transferred your emails, but if you have a dozen or so email accounts, it might be better to copy them from the old PC to the new one instead. To migrate your emails to another PC, all you have to do is create a backup of the emails:

  1. On the old computer, go to File -> Import & Export, choose ‘Export to a file’ and select “Personal Folder File (.pst)”.
  2. Choose the folder you wish to export. Usually it will be Personal Folders; make sure you tick the box to include all subfolders as well. Repeat this step if you want to copy your Archive Folders too.
  3. Copy the saved .pst file to your new PC, into the folder designated for Outlook files. It will be something like C:\Users\<your-user-name>\AppData\Local\Microsoft\Outlook\ (replace <your-user-name> with the name of the user you use to log onto Windows).
  4. On the new PC, go to File -> Data File Management (the Data Files tab should be selected), click “Add” to add a new data file to MS Outlook, and select the .pst file you just copied into the Outlook user directory. Do this for both the Personal Folders file and the Archives one if you’ve copied it.

Once this is complete, you’ll have all your emails on your new computer.

How do I copy all my Microsoft Outlook 2007 Email Account Settings from one computer to another?

With the above steps, your new computer will have all your emails but not the email account settings required to send/receive them. If you do not want to enter each configuration (POP3 details, server names, usernames) one by one, you will have to export part of your registry instead. Note that passwords are not carried over, so you will have to enter the password for each account manually on the first send/receive and save them so that you do not have to enter them every time. On your old computer, launch Registry Editor (Start -> Run -> and type regedit.exe), then browse to the following:

HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles\Outlook

Right-click on that key, click Export and save the file (.reg) somewhere on your old computer. Next, transfer this file to your new computer and double-click on it. This will launch warnings that you’re making changes to the registry; accept them. Note that you may need to be logged on as an administrator on Windows to perform this. Once this is done, you can launch Microsoft Outlook 2007 to see if it works. You may find that it doesn’t, with errors like being unable to locate a file to open, and this will be because the name of the profile created by the registry import does not match. If that happens, go back to the old PC and look at how the profile is named there. Mine was called DefaultProfile, but when I copied it onto the new laptop, it was named DefaultOutlook. You just need to rename it to DefaultProfile (the same name as it appears on the old computer) and everything should work fine. Re-open Outlook and it should work now. Remember that you will need to enter the passwords for all your email accounts when Outlook does its first send and receive, and tick the box that says “save password…”.

For some strange reason, when that was completed for me, Microsoft Outlook had swapped the Personal Folders file that I imported for a default one. You just need to re-select your own Personal Folders/Archive .pst files from where you copied them on your new computer and you will have all your emails again. Make your Personal Folders file the default and remove the other ones.

I tried to use Windows Easy Transfer to get MS Outlook migrated to my new laptop, but it didn’t work. I spent a lot of time downloading Windows Easy Transfer for my Vista computer (although it did have a copy, I had to install a newer version), entered the key and selected which files I wanted to transfer (in this case only the Outlook files), but it didn’t do the job. It actually just copied the email files and not the email account settings. As a matter of fact, your email account settings are stored in the registry, so you will have to use the above method to get Microsoft Outlook working on your new machine.

Getting started with Fluent NHibernate

Quite a few people have recommended NHibernate to me and, because I am currently working on a new project, I decided to try it out. In the past, I’ve used the Data Access Layer (DAL) of the company I was working for. It was brilliant because it would take strongly typed data and return either a collection of your entities or a single entity, based on what you were doing. That was all good, except that it was difficult to customise the DAL. NHibernate seems a very good option, but the learning curve is quite steep because you really need to understand how to set it up and configure your entity mappings. The XML mapping is the hardest part, I’ve heard!

I like simple things and want to get on with my project as quickly as I can. When I researched, I found that the alternative to the laborious XML mapping was Fluent NHibernate. So I’m going Fluent now, as I don’t want to waste time writing XML mappings when Fluent NHibernate can do it for me, and much more.

Creating your entity is pretty straightforward and the mapping is easy to digest as well. However, the first time I tried to compile, I ended up with a few errors: I needed references to NHibernate.dll, FluentNHibernate.dll and NHibernate.ByteCode.Castle.dll (for lazy loading) to make the web app compile. All in all, it was easy so far. It’s amazing that you can just do something like Session.SaveOrUpdate(customer) and your customer data is saved (less code is good). How wicked is that?

Now the problem for me was trying to figure out where NHibernate would sit in my application. It replaces the DAL, so do I just have a Business Logic Layer (BLL) now? How am I going to manage the NHibernate sessions? Many people suggest that you create a session every time there’s an HTTP request: you basically write an HTTP module which intercepts all requests to the web server and injects your logic into them, that is, it creates an NHibernate session when the request starts and closes it when the request ends. The method works fine, but I don’t want to be opening a session every time there’s an HTTP request; I’d rather open a session when I need information from the database. Therefore I’ve decided not to go down the HTTP module route but instead write an NHibernate session management class to handle the sessions. Note that if you’re using AJAX, you’ll have problems with the HTTP module method because your session will be closed at the end of the request (when the content is flushed out to the browser), so your AJAX call will fail because there’s no session associated with the HTTP request anymore.

Creating NHibernate sessions is an inexpensive task, so you can create as many as you want and close them afterwards, but creating the NHibernate SessionFactory object is what consumes the resources. It is therefore advisable to create the SessionFactory in the Global.asax file so that it is only created once but remains available for the lifetime of the web application. Only when your web application restarts would the session factory object be recreated. The session factory builds an in-memory representation of your database and the relationships between the tables.

Now all the entities share pretty much the same CRUD methods (Create, Retrieve, Update, Delete). Therefore it makes sense to use the Repository Pattern with NHibernate to make these methods available to all the entities. So if we create an IRepository interface, we could have the Repository do all the work for us as shown below:

IRepository<Employee> employees = new Repository<Employee>(sessionManager.OpenSession());
Employee employee = employees.GetById(7);

But what if we wanted to get the employees whose last name is “smith”? Then we add an IQueryable method to the repository so that we can run custom queries through it:

public IQueryable<T> GetList(QueryBase<T> query)
{
    return query.SatisfyingElementsFrom(Session.Linq<T>());
}

Of course we’ll need LINQ to NHibernate to help us out so that we can use expression trees, but this will allow us to send our custom queries through the repository. Note that it is better to have a class for each query that we want, for example a FindEmployeeByLastname class which would inherit from the QueryBase class to give us the desired query. This way you will not end up writing LINQ queries all throughout your application but rather have them in one single place, so that you can easily maintain your application as it scales.

This is the basis of how I’m going to use Fluent NHibernate in my next project, and it represents what I believe is best for my web application, based on a week’s worth of research on the internet.