Using PHP-FPM with Apache 2 on CentOS

Running Apache 2 with PHP is simple with mod_php, but there are more efficient alternatives such as PHP-FPM (FastCGI Process Manager), an alternative PHP FastCGI implementation. With it the PHP processes run standalone, without being embedded in the web server, and listen for incoming requests on either a TCP or a Unix socket. Web servers connect to the PHP process and send requests using the FastCGI protocol. This avoids mod_php's problem of every Apache worker carrying a full PHP interpreter, so it is more memory efficient and usually provides better performance.

These instructions are for CentOS 6.4, but the process should work similarly on other Linux distributions.

Setting up PHP-FPM

Install the FPM-CGI binary for PHP and set it to start automatically after a server reboot:

# yum install php-fpm
# chkconfig --levels 235 php-fpm on

Configure the PHP-FPM pool in /etc/php-fpm.d/www.conf to use a Unix socket and to enable status information for monitoring tools such as Munin:

;listen = 127.0.0.1:9000
listen = /tmp/php5-fpm.sock
pm.status_path = /status
ping.path = /ping

Start the service with:

service php-fpm start
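To check that the pool came up and is listening on the socket configured above, something like this should show a running service and the socket file:

# service php-fpm status
# ls -l /tmp/php5-fpm.sock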

Setting up Apache and mod_fastcgi

Apache can be configured to run FastCGI with two modules: mod_fastcgi and mod_fcgid. The difference is explained at Debian bug report #504132: “mod_fcgid passes just one request to the FCGI server at a time while mod_fastcgi passes several requests at once, the latter is usually better for PHP, as PHP can manage several request using several threads and opcode caches like APC usually work only with threads and not with processes. This means that using mod_fcgid you end up having many PHP processes which all have their very own opcode cache.”

In short: mod_fastcgi is better.

Install mod_fastcgi

So we need mod_fastcgi, which at the time of writing isn't available in the CentOS base or EPEL repositories; you can get it from RPMForge or build it from source.

Getting mod_fastcgi from RPMForge

Install the RPMForge repo:

# wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
# rpm -ivh rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

Set repository priorities so yum knows which repository to prefer:

# yum install yum-priorities
 
# vi /etc/yum.repos.d/epel.repo 
... add the line priority=10 to the [epel] section
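After the edit the [epel] section might look roughly like this (the existing lines stay as they are, only priority=10 is added):

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
...existing settings...
priority=10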

Install mod_fastcgi

# yum install mod_fastcgi

Or building mod_fastcgi from sources

Alternatively, you can build mod_fastcgi from source. Make sure the required packages are installed (httpd-devel and apr-devel are needed to compile mod_fastcgi):

# yum install libtool httpd-devel apr-devel apr

Get the latest mod_fastcgi source code:

# cd /opt
# wget http://www.fastcgi.com/dist/mod_fastcgi-current.tar.gz

Extract the tarball:

# tar -zxvf mod_fastcgi-current.tar.gz
# cd mod_fastcgi-2.4.6/

As we are using Apache 2, make a copy of Makefile.AP2:

# cp Makefile.AP2 Makefile

Compile and install mod_fastcgi for a 64-bit system:

# make top_dir=/usr/lib64/httpd
# make install top_dir=/usr/lib64/httpd

Configure mod_fastcgi

If you have mod_php enabled, disable it:

# mv /etc/httpd/conf.d/{php.conf,php.conf.disable}

Set up a directory through which Apache can route the requests. The directory must be accessible to Apache; here it is /usr/lib/cgi-bin/, so the routed (virtual) file will be e.g. /usr/lib/cgi-bin/php5-fcgi. The file itself never exists on disk.

# mkdir /usr/lib/cgi-bin/

Configure mod_fastcgi settings in /etc/httpd/conf.d/mod_fastcgi.conf to be:

LoadModule fastcgi_module modules/mod_fastcgi.so
 
<IfModule mod_fastcgi.c>
	DirectoryIndex index.php index.html index.shtml index.cgi
	AddHandler php5-fcgi .php
	Action php5-fcgi /php5-fcgi
	Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
	FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /tmp/php5-fpm.sock -pass-header Authorization
 
	# For monitoring status with e.g. Munin
	<LocationMatch "/(ping|status)">
		SetHandler php5-fcgi-virt
		Action php5-fcgi-virt /php5-fcgi virtual
	</LocationMatch>
</IfModule>

The handler and action send all PHP requests to the virtual URL created above, which is in turn passed to the external FastCGI server over the socket. The LocationMatch block exposes PHP-FPM's ping and status pages for monitoring.

Start Apache:

# service httpd start

PHP should now work.
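To verify, you can drop a phpinfo() test file into the document root and also hit the PHP-FPM status pages exposed above (the /var/www/html document root is the CentOS default and just an assumption, adjust to your setup; remember to remove the test file afterwards):

# echo "<?php phpinfo();" > /var/www/html/info.php
# curl -s http://localhost/info.php | grep -i "PHP Version"
# curl http://localhost/status
# curl http://localhost/ping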

Weblogic Server Auto Restart with Node Manager as Linux service

Sometimes servers need to reboot, and then it's nice to have certain services start automatically. Oracle Weblogic's Node Manager is one of them, and in order to have Node Manager start automatically it must be configured as a daemon. Unfortunately Oracle doesn't provide init scripts to run it as a Linux service, but it's pretty simple to create your own startup script. Just create a new nodemgr script under /etc/init.d/, add it as a service and you're done, as the Oracle Fusion Middleware blog describes.

For example, on Red Hat Enterprise Linux Server 5.6 with Oracle Weblogic Server 10.3.5 the /etc/init.d/nodemgr script looks like this (edit the script to reflect your Weblogic installation paths):


#!/bin/sh
#
# nodemgr Oracle Weblogic NodeManager service
#
# chkconfig:   345 85 15
# description: Oracle Weblogic NodeManager service

### BEGIN INIT INFO
# Provides: nodemgr
# Required-Start: $network $local_fs
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start: 3 4 5
# Default-Stop: 0 1 2 6
# Short-Description: Oracle Weblogic NodeManager service.
# Description: Starts and stops Oracle Weblogic NodeManager.
### END INIT INFO

. /etc/rc.d/init.d/functions

# Your WLS home directory (where wlserver_10.3 is)
export MW_HOME="/oracle/product/mw11g"
export JAVA_HOME="/oracle/java/jdk1.6.0_29"
DAEMON_USER="oracle"
PROCESS_STRING="^.*/oracle/product/mw11g/.*weblogic.NodeManager.*"

source $MW_HOME/wlserver_10.3/server/bin/setWLSEnv.sh > /dev/null
export NodeManagerHome="$WL_HOME/common/nodemanager"
NodeManagerLockFile="$NodeManagerHome/nodemanager.log.lck"

PROGRAM="$MW_HOME/wlserver_10.3/server/bin/startNodeManager.sh"
SERVICE_NAME=`/bin/basename $0`
LOCKFILE="/var/lock/subsys/$SERVICE_NAME"

RETVAL=0

start() {
        OLDPID=`/usr/bin/pgrep -f $PROCESS_STRING`
        if [ ! -z "$OLDPID" ]; then
            echo "$SERVICE_NAME is already running (pid $OLDPID) !"
            exit
        fi

        echo -n $"Starting $SERVICE_NAME: "
        /bin/su $DAEMON_USER -c "$PROGRAM &"

        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && touch $LOCKFILE
}

stop() {
        echo -n $"Stopping $SERVICE_NAME: "
        OLDPID=`/usr/bin/pgrep -f $PROCESS_STRING`
        if [ "$OLDPID" != "" ]; then
            /bin/kill -TERM $OLDPID
        else
            /bin/echo "$SERVICE_NAME is stopped"
        fi
        echo
        /bin/rm -f $NodeManagerLockFile
        [ $RETVAL -eq 0 ] && rm -f $LOCKFILE

}

restart() {
        stop
        sleep 10
        start
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart|force-reload|reload)
        restart
        ;;
  condrestart|try-restart)
        [ -f $LOCKFILE ] && restart
        ;;
  status)
        OLDPID=`/usr/bin/pgrep -f $PROCESS_STRING`
        if [ "$OLDPID" != "" ]; then
            /bin/echo "$SERVICE_NAME is running (pid: $OLDPID)"
            RETVAL=0
        else
            /bin/echo "$SERVICE_NAME is stopped"
            # LSB exit code for a stopped service
            RETVAL=3
        fi
        ;;
  *)
        echo $"Usage: $0 {start|stop|status|restart|reload|force-reload|condrestart}"
        exit 1
esac

exit $RETVAL


Add the Node Manager to start after server reboot:

# chmod +x /etc/init.d/nodemgr
# chkconfig --add nodemgr
# chkconfig --list nodemgr
nodemgr         0:off   1:off   2:off   3:on    4:on    5:on    6:off

Also now the Node Manager can be controlled via the service command (e.g. service nodemgr restart).
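The basic commands then work like with any other service (the pid in the sample output is just an example):

# service nodemgr status
nodemgr is running (pid: 12345)
# service nodemgr restart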

When you have Node Manager restarting automatically after a system reboot, you can also have Weblogic managed servers restarted automatically by Node Manager. Managed servers will be restarted only if they were running at the time the shutdown was issued. Just activate the Auto Restart option in the Administration Console (Environment > Servers > selected server > Health Monitoring) and you might also need to set CrashRecoveryEnabled to "true" in $WL_HOME/common/nodemanager/nodemanager.properties.
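The relevant line in nodemanager.properties is simply:

CrashRecoveryEnabled=true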

With a little scripting and configuration, your sysadmin tasks have now become a little easier.

Running FishEye & Crucible as a service in Linux

Atlassian’s tools for supporting software development are great, but they aren’t really admin friendly out of the box. For example, FishEye & Crucible doesn’t ship with scripts to start it at system boot time, but with the help of Atlassian’s wiki and a bit of scripting you can run it as a normal service. First we create a dedicated user for Crucible, and then we add a new service for it. I have done this on CentOS 5.7 x86_64.

Setting up the service account

As the root user, create a separate “FishEye & Crucible” service account from the root shell:

# useradd -c "FishEye & Crucible service account" -d /home/crucible -m crucible

To make the setup survive FishEye & Crucible upgrades, we create a symbolic link pointing to the latest version (modify “/opt/fecru” to match your deployment):

# ln -s /opt/fecru/fecru-2.7.15 /opt/fecru/latest

Then, ensure that this user is the filesystem owner of the FishEye & Crucible instance (modify “/opt/fecru” to match your deployment).

# chown -R crucible:crucible /opt/fecru

Running Crucible as a crucible user

Save the following script to /etc/init.d/crucible. Be sure to edit the FISHEYE_HOME value to the location where your FishEye/Crucible instance resides:

#!/bin/bash
#
# crucible    FishEye & Crucible service
# (the chkconfig header below is required for "chkconfig --add crucible" to work)
#
# chkconfig: 345 95 5
# description: FishEye & Crucible service
#
# RUN_AS: The user to run FishEye & Crucible as. It's recommended that you create a separate user account for security reasons.
RUN_AS=crucible
 
# FISHEYE_HOME: The path to the FishEye & Crucible installation. It's recommended to create a symbolic link to the latest version so the process will still work after upgrades.
FISHEYE_HOME="/opt/fecru/latest"
# FISHEYE_INST: The path where the data itself will be stored.
export FISHEYE_INST="/opt/fecru/fecru-data"

fisheyectl() {
        if [ "x$USER" != "x$RUN_AS" ]; then
                # If running without FISHEYE_INST
                # su - "$RUN_AS" -c "$FISHEYE_HOME/bin/fisheyectl.sh $1"
                su - "$RUN_AS" -c "FISHEYE_INST=$FISHEYE_INST $FISHEYE_HOME/bin/fisheyectl.sh $1"
        else
                "$FISHEYE_HOME/bin/fisheyectl.sh $1"
        fi
} 

case "$1" in
        start)
                fisheyectl start
                ;;
        stop)
                fisheyectl stop
                ;;
        restart)
                fisheyectl stop
                sleep 10
                fisheyectl start
                ;;
        *)
                echo "Usage: $0 {start|stop|restart}"
esac
 
exit 0

After saving the script, modify its permissions so that it can be executed:

# chmod 755 /etc/init.d/crucible

Running Crucible as a service

Now that we have an init script, we can add it as a service and configure the system to run it on startup (more precisely, ensure that Crucible runs in runlevels 3, 4 and 5):

chkconfig --add crucible
chkconfig crucible on

Verify that the script has been installed correctly:

# chkconfig --list crucible
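The output should show the service enabled for runlevels 3, 4 and 5, roughly like this:

crucible        0:off   1:off   2:off   3:on    4:on    5:on    6:off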

After this has been done you can manually start or stop the service by using these commands:

service crucible stop
service crucible start

And you’re done.
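To verify that FishEye & Crucible actually came up, you can poke its web interface; by default it listens on HTTP port 8060 (adjust if you have changed the port in your config.xml):

# curl -I http://localhost:8060/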

Using CAcert.org signed certificates for TLS

Setting up Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), for Apache, Postfix and an IMAP server like Dovecot is fairly easy. You just need some digital certificates and configuration. If you don’t want to pay for certificates from trusted sources like Thawte, or you just don’t need that kind of trust (for development purposes), you can always produce your own certificates. But there is also a middle way: using CAcert.org signed certificates.

Background
Wikipedia tells us that CAcert.org is a community-driven certificate authority that issues free public key certificates. CAcert automatically signs certificates for email addresses controlled by the requester and for domains for which certain addresses (such as “hostmaster@example.com”) are controlled by the requester. Thus it operates as a robot certificate authority. CAcert certificates can be used like any other SSL certificates although they are considered weak because CAcert does not emit any information in the certificates other than the domain name or email address. To create higher-trust certificates, users can participate in a web of trust system whereby users physically meet and verify each other’s identities. They are also not as useful in web browsers as certificates issued by commercial CAs such as VeriSign, because most installed web browsers do not distribute CAcert’s root certificate. Thus, for most web users, a certificate signed by CAcert behaves like a self-signed certificate.

Generating Certificates
The procedure to get your certificate signed by CAcert is rather simple. This guide assumes that the certificates live in /etc/ssl/cacert/ and that you are working as root.

0. Join CAcert.org and fill in your details. After email verification and login, add your domain; the service will verify that you control it by sending mail to one of the following accounts: root, hostmaster, postmaster, admin, webmaster, or an email address found in the whois data of the domain you provided.

1. Generate a private key that is not file encrypted:

openssl genrsa -out domainname.key 1024
chown root:root domainname.key
chmod 0400 domainname.key

Private keys should belong to “root” and be readable only by root.

You could also create a private key that is encrypted: openssl genrsa -des3 -out domainname.key 1024

2. Create a CSR with the RSA private key (output will be PEM format). Do not enter extra attributes at the prompt and leave the challenge password blank (press enter):

openssl req -new -key domainname.key -out domainname.csr

3. Verify the contents of the CSR or private key:

openssl req -noout -text -in domainname.csr
openssl rsa -noout -text -in domainname.key

4. Submit the CSR to the CAcert.org web site and request a new server certificate (Class 1). When you are asked for the CSR, paste the contents of domainname.csr. It should look like this:

-----BEGIN CERTIFICATE REQUEST-----
MIIB3TCCAUYCAQAwgZwxCzAJBgNVBAYTAkZJMRAwDgYDVQQIEwdVdXNpbWFhMQ8w
...clip...
MQ==
-----END CERTIFICATE REQUEST-----

You can verify the contents of the request before sending it:

openssl req -in domainname.csr -text -verify -noout

5. Copy the server certificate from the CAcert.org web page, save it to a domainname.crt file and set its permissions:

chmod a=r domainname.crt

Check at least the Validity and Subject fields:

openssl x509 -in domainname.crt -text -noout

6. Get the CAcert.org root certificate:

wget -nv https://www.cacert.org/certs/root.crt -O cacert-org.crt
chmod a=r cacert-org.crt

Check the contents:

openssl x509 -in cacert-org.crt -text -noout
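
You can also verify that the server certificate actually chains up to the CAcert root you just downloaded:

openssl verify -CAfile cacert-org.crt domainname.crt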

After that you’re ready to configure services like Apache, Postfix and Dovecot to use the new certificate. More about that in a later post.
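As a quick preview, a minimal Apache SSL virtual host using the files generated above might look roughly like this (the ServerName and DocumentRoot are placeholders, mod_ssl needs to be installed, and the certificate paths assume /etc/ssl/cacert/ as used in this guide):

<VirtualHost *:443>
	ServerName www.example.com
	DocumentRoot /var/www/html
	SSLEngine on
	SSLCertificateFile /etc/ssl/cacert/domainname.crt
	SSLCertificateKeyFile /etc/ssl/cacert/domainname.key
	SSLCACertificateFile /etc/ssl/cacert/cacert-org.crt
</VirtualHost>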

WordPress mod_rewrite rules taking over mod_status and mod_info

After moving Rule of Tech to a new server and setting up monitoring I noticed that Apache’s mod_status and mod_info pages (server-status and server-info) weren’t working as expected. As usual, a little bit of Googling solved the problem.

The problem was that the .htaccess rules created by WordPress were taking over the server-info and server-status URLs defined in Apache’s configuration, which don’t exist as files on disk, and so a page-not-found error was returned. WordPress sets up its rewrite rules to handle all the permalinks on the site and to send any request for a non-existent file to index.php. It wasn’t really a WordPress problem, and it would happen with any application that uses the same kind of catch-all rewrite rules to handle all the URLs inside the application.

The solution was to add a rewrite condition so that the server-status and server-info URLs are not processed, for example: RewriteCond %{REQUEST_URI} !=/server-status. The other way is to stop the rewriting process when these URLs are matched, with a rule like: RewriteRule ^(server-info|server-status) - [L].

The WordPress rewrite rules should look like this:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
# server info and status
RewriteRule ^(server-info|server-status) - [L]
# RewriteCond %{REQUEST_URI} !=/server-status
# /server info and status
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . index.php [L]
</IfModule>
# END WordPress
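
After reloading Apache you can check that the status pages respond again (this assumes your mod_status and mod_info configuration allows access from the host you run curl on):

# curl -I http://localhost/server-status
# curl -I http://localhost/server-info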

Installing Sun JDK 1.6 on CentOS

CentOS doesn’t have a package for the Sun JDK, so it has to be installed manually. It’s fairly easy, but there are a few steps involved. This guide has been tested on CentOS 5.4 x86_64.

Step 1. Initial setup for building RPM
-!- Do this as a non-root user

  1. Create ~/.rpmmacros
    • $ vim ~/.rpmmacros
      %_topdir /home/<username>/rpmbuild
      %_tmppath %{_topdir}/tmp
      
  2. Create needed folders:
    • $ mkdir -p ~/rpmbuild/{SOURCES,SRPMS,SPECS,RPMS,tmp,BUILD}
      
  3. Make sure the build environment is complete. Some needed packages are:
    • $ sudo yum install -y rpm-build gcc gcc-c++ redhat-rpm-config
      

Step 2. Installing your favorite JDK

  1. Download Sun JDK 1.6 update 18 from the Sun Java download site or the Sun JDK archive.
    • Choose the correct platform (for me it’s Linux x64) and download jdk-6u18-linux-x64-rpm.bin
  2. Give it executable rights: $ chmod 755 jdk-6u18-linux-x64-rpm.bin
  3. Run the binary to extract it into RPM form: $ ./jdk-6u18-linux-x64-rpm.bin
  4. Install it:
    • $ sudo rpm -Uvh jdk-6u18-linux-amd64.rpm
      
  5. Log out and in again to make the changes in the paths take effect
  6. Check the install
    • $ java -version
      java version "1.6.0_18"
      Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
      Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode)
      
  7. Java is now installed at /usr/bin/java
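
If you want to double-check where the Sun RPM actually put the JDK (on a default install the JDK itself typically ends up under /usr/java/, with /usr/bin/java pointing at it; paths may differ on your system), run:

    $ ls -l $(which java)
    $ ls -l /usr/java/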