7. cronjobs
• schedule task ( crontab -u username -e )
* * * * * command to be executed
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)
• define constants, access via $_SERVER['name']
NAME = VALUE
8. cronjobs
# define constants
APPLICATION_ENVIRONMENT = development
DB_SERVER = db1.somehost.com
# run every day at 4:30
30 4 * * * /path/to/somescript.php -daily
# run every 5 minutes
*/5 * * * * /path/to/somescript.php
# run every monday and thursday, at midnight and 9AM
0 0,9 * * 1,4 /path/to/somescript
10. the basics : argc | argv
• $argc = number of arguments
• $argv = array of arguments
• first element = script filename
photo by Kore Nordmann - http://kore-nordmann.de/photos/misc/full/elephpant_39.html
11. the basics : argc | argv
• $argc = number of arguments
• $argv = array of arguments
• first element = script filename
// Number of arguments
echo '$argc = ';
echo $argc . PHP_EOL;
// Array of arguments
echo '$argv = ';
print_r($argv);
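Run from a shell, the snippet above produces output along these lines (args.php is an illustrative filename):

```shell
$ php args.php hello world
$argc = 3
$argv = Array
(
    [0] => args.php
    [1] => hello
    [2] => world
)
```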
14. getopt ( )
• arguments are parsed
• no validation
• short options
• long options since 5.3
• no, optional or required values
photo by Martha de Jong-Lantink - http://www.fotopedia.com/wiki/Emperor_Penguin#!/items/flickr-2080338469
15. getopt ( )
• arguments are parsed
• no validation
• short options
• long options since 5.3
• no, optional or required values
// Define short options
$short = "hv";
$short .= "u:";  // required
$short .= "p::"; // optional

// Define long options
$long = array(
    "help", "verbose",
    "user:",    // required
    "passwd::", // optional
);

// Get the options (php 5.3)
$options = getopt($short, $long);

// Show it
print_r($options);
40. steps to create a daemon
• Fork off the parent process
• Change the file mode mask
• Open any logs for writing
• Create a new session id & detach current session
• Change the current working directory
• Close standard file descriptors
45. fork off the parent process
• pcntl_fork ( )
• creates copy of parent
• different pid & ppid
• [pid] : parent process
• 0 : child process
• -1 : error forking
photo by Conan (conanil) - http://www.flickr.com/photos/conanil/1215118030/
46. fork off the parent process
• pcntl_fork ( )
• creates copy of parent
• different pid & ppid
• [pid] : parent process
• 0 : child process
• -1 : error forking
// We fork the script
$pid = pcntl_fork();

// We add our daemon & child code
if ($pid == -1) {
    die('Could not fork!');
} elseif ($pid) {
    // we are the parent
} else {
    // we are the child
}
47. change the file mode mask
• umask (mask)
• mask: octal notation
• system umask & mask
• revoke permissions
• 0: reset to system mask
• dirs: 0777, files: 0666
photo by The Mad Hatter - http://www.lauraleeburch.com/blog/2010/05/the-mad-hatter-2/
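In PHP this step is a single call; a minimal sketch:

```php
// Reset the inherited file mode creation mask so files and
// directories created by the daemon (logs, pidfiles, ...) get
// the default 0666 / 0777 permissions.
umask(0);
```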
49. Open any logs for writing
• no feedback via shell
• debug info via logs
• logging via
• database
• logfiles
• syslogd
photo by Miranda Hine - http://www.flickr.com/photos/mirandahine/5500665022
50. Open any logs for writing
• no feedback via shell
• debug info via logs
• logging via
• database
• logfiles
• syslogd
// open syslog with processID
openlog('SyslogTest',
    LOG_PID | LOG_PERROR,
    LOG_LOCAL0);
$access = date('H:i:s');
syslog(LOG_WARNING, 'Data was accessed @ ' . $access);
closelog();

// Oct 20 15:36:12 amazium SyslogTest[11785]: Data was accessed @ 15:36:12
51. Create a new session id
• acquire unique SID
• avoid system orphans
• posix_setsid ( )
• sid on success
• -1 on error
photo by Notch Brewing - http://www.notchsession.com
52. Create a new session id
• acquire unique SID
• avoid system orphans
• posix_setsid ( )
• sid on success
• -1 on error
// Forked, child part
// let's detach
$sid = posix_setsid();

// Die on failure to detach
if ($sid < 0) {
    die('Could not detach session id.');
}
53. Change current working dir
• daemon can be started from any location
• mounts can disappear
• change cwd to a safe location
• getcwd ( )
• chdir ( )
photo by Majeq - http://majeq.deviantart.com/art/Broken-Bridge-Speedy-176766406
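In code this step is tiny; a hedged sketch, using the root directory as the safe location:

```php
// Move to the root directory: it always exists and cannot be
// unmounted, so the daemon never pins a removable or network mount.
if (chdir('/') === false) {
    die('Could not change working directory to /');
}
```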
54. Close standard file descriptors
• STDIN, STDOUT, STDERR
• inherited from parent
• unknown targets
• i.e. output still to shell
• close & reconnect file descriptors with fclose & fopen
photo : istockphoto
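A hedged sketch of the close-and-reconnect step (the logfile paths are placeholders). After closing, the next streams opened take over descriptors 0, 1 and 2:

```php
// Close the descriptors inherited from the parent...
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);

// ...and reopen them: the first streams opened after closing reuse
// fds 0, 1 and 2, so echo and warnings now land in the logfiles.
$stdin  = fopen('/dev/null', 'r');
$stdout = fopen('/var/log/mydaemon.log', 'ab');
$stderr = fopen('/var/log/mydaemon.err', 'ab');
```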
69. signals
• standard vs real-time
• communication
• alerts
• timers
• signal handling
photo by Majeq - http://majeq.deviantart.com/art/Broken-Bridge-Speedy-176766406
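The bullets above map onto a handful of pcntl calls; a minimal hedged sketch of a graceful-shutdown handler (the $running flag and loop body are illustrative):

```php
// Stop flag flipped by the signal handler
$running = true;

// Install a handler: SIGTERM now asks us to stop instead of killing us
pcntl_signal(SIGTERM, function ($signo) use (&$running) {
    $running = false;
});

while ($running) {
    // ... payload ...
    sleep(1);
    pcntl_signal_dispatch(); // deliver pending signals (php 5.3+)
}

// From another process: posix_kill($pid, SIGTERM);
```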
106. supervisord
• process control system
• simple
• centralized
• efficient
• extensible
• http://supervisord.org
[program:gearman_tika]
command=/path/to/script.php
autostart=true
autorestart=true
logfile=/var/log/myscript.log
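With a [program:...] section like the one on this slide in place, the bundled supervisorctl tool drives it; a few common invocations (the process name is taken from the example section):

```shell
supervisorctl reread            # pick up config changes
supervisorctl update            # start/stop programs to match config
supervisorctl status            # list managed processes
supervisorctl restart gearman_tika
supervisorctl tail gearman_tika # peek at its log output
```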
107. monit
• manage & monitor processes, files, directories & devices
• restart failed daemons
• control file with service entries or checks
108. check process processQueues with pidfile "/var/run/amazium/processQueues.pid"
start = "/etc/init.d/processQueues start"
stop = "/etc/init.d/processQueues stop"
if does not exist then restart
if cpu usage is greater than 60 percent for 2 cycles then alert
if cpu usage > 98% for 5 cycles then restart
if 2 restarts within 3 cycles then timeout
alert foo@bar.baz
111. daemontools
• collection of tools to manage services
• supervise : monitors a service
• requires run script with code to start daemon
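The run script supervise expects is just an executable that starts the daemon in the foreground; a minimal hedged sketch (paths are illustrative):

```shell
#!/bin/sh
# ./run — supervise starts this and restarts it whenever it exits.
# exec replaces the shell, so supervise tracks the php process itself.
exec /usr/bin/php /path/to/somescript.php 2>&1
```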
112. zombies
• defunct process
• child has finished but is still in process table
• reaper for SIGCHLD
• pcntl_signal(SIGCHLD, SIG_IGN);
wallpaper from Pozadia - http://dark.pozadia.org/wallpaper/Dawn-of-the-Zombies/
113. zombies
• defunct process
• child has finished but is still in process table
• exec wait system call
• reaper for SIGCHLD
• pcntl_signal(SIGCHLD, SIG_IGN);
// Reaper to clean up zombies
function reaper($signal)
{
    $pid = pcntl_waitpid(-1, $status, WNOHANG);
    if ($pid == -1) {
        // No child waiting.
    } else {
        if (pcntl_wifexited($status)) {
            echo "Process $pid exited";
        } else {
            echo "False alarm on $pid";
        }
        // Check if more children ended
        reaper($signal);
    }
    pcntl_signal(SIGCHLD, 'reaper');
}

// Install signal handler on SIGCHLD
pcntl_signal(SIGCHLD, 'reaper');

// If there is no need to know when a
// child has finished, you don't need
// to use the reaper, use SIG_IGN:
pcntl_signal(SIGCHLD, SIG_IGN);
123. Apache ActiveMQ
• middle ground
• broker architecture
• p2p architecture
• easier to implement
• less performant
124. Apache hadoop
• process large datasets
• MapReduce
• local processing & storage
• failure detection at application level
• http://vimeo.com/20955076
125. need more information on distribution?
• see workshop "Think like an ant, distribute the workload" by Helgi Þormar Þorbjörnsson
• Video : http://vimeo.com/41013062
• Slides : http://www.slideshare.net/helgith/scale-like-an-ant-distribute-the-workload-dpc-amsterdam-2011
126. please rate my talk
https://joind.in/6226
twitter : @jkeppens
blog : blog.amazium.com
Editor's Notes
When writing web applications, most of the action happens in a web context. But sometimes you need to support your application with scripts that run in the background. Typical tasks are generating reports, performing maintenance, loading external content, aggregating or analyzing data, sending out mass mailings, and much more. Scripts performing these tasks aren't run via the browser. PHP CLI, short for Command Line Interface, is a special SAPI, or Server API, that allows you to run PHP scripts on the command line.
The Server API or SAPI is responsible for coordinating the PHP lifecycle. You can look at it as the bridge between the web server or command line and PHP. The SAPI passes requests to PHP core, which handles them, but is also responsible for low-level operations like file streams, error handling, etc. Next to this, we find the Zend Engine, which parses and compiles the scripts we write and executes them in its virtual machine. At times, the Zend Engine hands over control to the extension layer, where PHP extensions inject new functionality into PHP.
When creating PHP scripts to run on the command line, you lose all functionality related to the web context. This is mostly reflected in the PHP globals: $_GET and $_POST aren't available anymore. $_SERVER is still there but is missing all web-related values. It did gain terminal-related information, which can be quite useful at times.
While you can run your scripts manually, one of the typical uses of PHP on the command line is via cron. Using cron, you can schedule when a specific script has to be executed. It is possible to define constants which will be accessible in the $_SERVER global. We typically use this to pass on the application environment, so the code knows which config to load from the ini files.
At the top of this slide, you can see that I defined 2 constants. I also configured 3 scripts to run at specific times. I'm not going into this now, but you can always ask me after the presentation if you want more information.
One of the key differences between web and command line scripts is the way they handle input/output. As said earlier, you don't have access to the request globals, and your command line is also not able to display HTML in a nice way.
The simplest way to do input is using params that are passed when executing the script. Two globals help you out with this: $argc holds the number of arguments and $argv is an array containing every argument. The first element of the array is always the filename of the PHP script, so $argc is always 1 or bigger. You can also find argc and argv in the $_SERVER global array.
So if we have a quick look at this script ...
This gives the following output. As you can see, nothing is linked; it's just an array with a bunch of values. While this might be good for very simple things, you usually want something more.
PHP has an implementation of the GNU getopt functions. Getopt allows you to parse arguments. Before PHP 5.3 only short options were possible, but since then, long options are also available. Per option you can define whether it has no value, an optional value or a required value. Whenever a value is optional, it needs to be attached to the short option, otherwise getopt will not be able to link it to the option. There is no validation of the values entered, and even in cases where you do something wrong, you don't always receive an error.
In this example you can see I defined the short options in a string and the long options in an array. I then pass these to getopt and receive an array of options.
Let's run this code. In the first example, we provide a value for -u (which is required), and a value for -p (which is optional). As you can see, the value passwd is attached to the short option -p. The resulting array looks like we would expect. Let's mess things up a bit. In the second example, I leave out the required value for -u and detach the value passwd from the short option -p. We don't get an error, but a result that looks like this: -u has a value, which is the first argument following the option; -p has no value, and the string "passwd" has been ignored. So it's already better than argv and argc, but there is still room for improvement.
Pear, the ConsoleTools from eZ Components and Zend Framework each have their own implementation of getopt. Personally I like the Zend Framework implementation best; the Zend_Console_Getopt class is a little gem. It supports short and long options, which are linked, so it knows that -u and --user are the same thing. You can define a help message for each option, and you can set whether a value is required, optional or forbidden. The class provides the getUsageMessage method, which generates a usage message based on the help messages you provided. After parsing, the options are available as properties, under both their short and long option names. These are aliases for each other, so even if you used -u with a value, you can access it on the return object using the user property. Zend_Console_Getopt throws exceptions in case of issues, so you can for instance show the usage message when this occurs. There are some extra features; if you need to know more, I suggest you check out the class reference in Zend Framework. Let's have a look at a little code example.
In contrast to the example for the basic getopt, you can see that the options are now linked together and that we provided a help message for each. This config array is passed when creating a new instance of Zend_Console_Getopt. After calling the parse method, we can access the options via properties on the object. You can see we access help, user, password, but also v, the short for verbose. When we get an exception or if the help option is provided, we display the usage message. When you run this code, it behaves as expected. But let's have a quick look at the usage message, which is a nice feature.
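A hedged sketch of the Zend_Console_Getopt pattern described here (Zend Framework 1; the option set mirrors the earlier getopt example and is illustrative):

```php
// Zend Framework 1 autoloading or include path is assumed
require_once 'Zend/Console/Getopt.php';

$opts = new Zend_Console_Getopt(array(
    'help|h'     => 'Displays usage information',
    'verbose|v'  => 'Verbose output',
    'user|u=s'   => 'Username (required string value)',
    'passwd|p-s' => 'Password (optional string value)',
));

try {
    $opts->parse();
} catch (Zend_Console_Getopt_Exception $e) {
    // Invalid input: show the auto-generated usage message
    echo $e->getUsageMessage();
    exit(1);
}

if ($opts->help) {
    echo $opts->getUsageMessage();
    exit(0);
}

// Short and long names are aliases: -u and --user both end up here
echo 'User: ' . $opts->user . PHP_EOL;
```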
While command line arguments are one way to get input, they're not always what you want. One of the strengths of the console is that you can interact with your user. It is possible to fiddle with input streams, but it's not required. On Linux, the GNU readline library does just that. It allows you to ask for information and read in what the user provided. It has built-in support for autocompletion and command history. If you need interleaving of I/O and user input, there is also support for callback handlers in combination with advanced stream handling, but I haven't tried that out myself yet. :)
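A minimal sketch of such a prompt (assumes PHP was built with the readline extension):

```php
// Ask the user a question and read the answer from the terminal
$name = readline('What is your name? ');
readline_add_history($name); // arrow-up now recalls the answer
echo 'Hello, ' . $name . PHP_EOL;
```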
Pear, Zend Framework and the ConsoleTools of eZ Components all provide functionality that allows you to make the output of your scripts a bit more attractive. They allow for text formatting, which includes coloring text and background, putting text in bold or italics, underlining text, and even blinking! There is also support for progress bars and conversion of arrays into tables, including callback functionality on the table columns. I've added some examples from the Pear console classes.
If you want to really go overboard, there is ncurses. Ncurses is also an implementation of a GNU library; it allows you to create windows, and it supports input from the mouse and keyboard, coloring, and much more. One of its disadvantages is that its documentation is really bad. If you want to use it, you might have to look at documentation of ncurses implementations in other languages, or the Linux manpages. Joshua Thijssen gave me an interesting tip about a Linux command line tool called whiptail. The tool creates similar interfaces and takes a lot of the work away from you. You can execute it from PHP, capture the return value and use it back in your script.
Sometimes you need more. When I worked for Roulette69 we had a background process which analyzed in real time the games that were played, generated statistics and put those in memcache, where they were picked up and shown to all players that were online. The script responsible for this was a daemon. A daemon or service is basically a background process designed to run autonomously, with little or no user interaction. The name has its origins in Greek mythology: daemons were neither good nor evil; they were little spirits that did useful things for mankind. The first time I created a daemon, I simply wrote a PHP script with an endless loop in it. Then I called it with an ampersand after the command, and it was sent to the background. It was easy. But it was also bad. When you need a daemon, there are a couple of things you need to do to make sure it runs smoothly.
Those 6 steps are:
- Fork off the parent process
- Change the file mode mask
- Open any logs for writing
- Create a new session id & detach current session
- Change the current working directory
- Close standard file descriptors
After you have taken care of this, you can add the payload: the code you actually want to execute. So, what does this all mean?
Step one is forking off the parent process. A daemon can be started by the system itself or by a user on the terminal. When it is started, it behaves like any other executable on the system. To make it run autonomously, we must detach it from where it was started. You do this by creating a child process where the actual code is executed; this is known as forking. When you fork, you create a full copy of the original process. The original is called the parent, the copy the child. The only way they differ is in their process id (pid) and their parent process id (ppid). This also means that all variables initiated in the parent before the fork are also available as-is in the child's thread of execution, which can lead to some unexpected and unwanted behaviour. For this reason, you always have to code as defensively as possible when working with daemons, and do tons of error checking.
When forking, we can have 3 return values. On success, the pid of the child process is returned in the parent's thread of execution, and 0 in the child's. On failure, -1 is returned in the parent's context, no child process is created, and a PHP error is raised.
Our child process is a clone of the parent process up to the point of the fork. This means that, amongst other things, we also inherited the umask of the parent. The umask, or user file creation mask, limits the default permissions of newly created files and folders. The default permissions are 0777 (read/write/execute for all) on directories and 0666 (read/write for all) on files. The system will typically set the umask to 0022, which takes away write access for group and other. The child has no idea what the umask is set to, so it's always good to reset it using umask(0), even if we don't plan to use it, so the daemon can write files (including logs) that receive the proper permissions.
Since we don't receive any feedback from the command line, we need an alternative: logging. This allows you to follow what is going on. Logging can happen to the database, to files, or even via syslog.
Syslog sends your log messages to a system-wide logger, where they can be configured to be written to a file, sent to a network server, or filtered away entirely. I included a quick example for reference, but I'm not going to go into it right now.
Each process on a Unix or Linux system is a member of a process group or session. The id of each group is the process id of its owner. After forking, the child inherits the process group of the parent; the child's parent process id is equal to the parent's process id. Since the parent is going to exit, the child needs to create its own process group and become its own process leader, otherwise it will become an orphan in the system.
In PHP we detach our session using posix_setsid. It returns the new session id on success or -1 on error.
You can already guess it: our child also inherited the working directory of the parent. The working directory could be a network mount, a removable drive, or somewhere the administrator may want to unmount at some point. To unmount any of these, the system will have to kill any processes still using them, which would be unfortunate for our daemon. For this reason we set our working directory to the root directory, which we are sure will always exist and can't be unmounted.
Since we detached the child from the terminal, it can't interact with the user directly. As a consequence it has no use for the standard file descriptors STDIN, STDOUT and STDERR. As with everything else, the file descriptors are inherited from the parent; the child has no idea what they are connected to. So we close the file descriptors. If you don't, and you still have your terminal open after launching the daemon, you might get unwanted output from it at times.
One of the nice things about the file descriptors is that after you have closed them, the system will reattach them to the next resources you open. There is little use for STDIN, so we point it at /dev/null. On the other hand, we reconnect STDOUT and STDERR to logfiles: whenever you echo something to the screen, it will be written to the logfile.
Let's put it all together.
One of the things you need to keep in mind when writing daemons is that you're in it for the long run. Since PHP is typically used for short scripts, it doesn't usually garbage collect during execution, but when the script has finished. This is problematic for daemons and can lead to memory usage building up. Before PHP 5.3 there wasn't much you could do; since then we have circular reference garbage collection, which makes our lives a little easier. To get this to work, you need to decrease the reference count on chunks of memory by setting variables to null or by unsetting them. Once in a while you should run gc_collect_cycles in your while loop to take out the trash; don't do this too often though. Another thing to keep in mind is that PHP caches file statistics whenever it uses file functions. If you perform a lot of file operations on the same files in different runs of your loop, you will work on cached information instead of real-time information. If your daemon runs for a long time, this might be a problem, so you should run clearstatcache at regular intervals.
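That housekeeping can be sketched as follows (the iteration interval and the $running flag are illustrative):

```php
// Long-running loop housekeeping for a daemon
gc_enable(); // make sure the 5.3 cycle collector is on

$iterations = 0;
$running = true;
while ($running) {
    // ... payload: drop references when done with them ...
    // $data = null; unset($data);

    if (++$iterations % 1000 === 0) {
        gc_collect_cycles(); // reclaim circular references
        clearstatcache();    // drop stale cached file statistics
    }
    sleep(1);
}
```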
Let's put it all together.
IPC stands for Inter-Process Communication, a set of techniques that allow processes to talk to each other. Since each process has its own address space, how can processes communicate at all? The answer is the kernel, which has access to all memory: we can ask the kernel to allocate space that several processes use to exchange data. Processes can also communicate through a file accessible to both, by opening, reading and writing it, but that requires a lot of I/O, which costs time. There are several IPC mechanisms, usable between processes on the same computer or on different computers in the same network:
- Pipes: let processes communicate by exchanging messages; named pipes let processes on different systems communicate over the network.
- Shared memory: one process creates a portion of memory which other processes can access, and values are exchanged there.
- Message queues: a structured and ordered list of memory segments where processes store or retrieve data.
- Semaphores: a synchronizing mechanism for processes accessing the same resource; no data is passed with a semaphore, it simply coordinates access to shared resources.
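As a taste of one of these mechanisms, here is a hedged sketch of a System V message queue in PHP (requires the sysvmsg extension; the key derivation and message type are arbitrary illustration values):

```php
// Derive an IPC key from this file and attach to (or create) a queue
$key   = ftok(__FILE__, 'q');
$queue = msg_get_queue($key);

// Producer side: send a message of type 1 (arrays are serialized for us)
msg_send($queue, 1, array('task' => 'resize', 'id' => 42));

// Consumer side (possibly another process): receive a type-1 message
msg_receive($queue, 1, $type, 4096, $payload);
print_r($payload);

// Clean up the queue when it is no longer needed
msg_remove_queue($queue);
```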
Sometimes you will need to communicate with a daemon process. One way to do so is by sending signals. There are a number of different signals you can send, some with a specific meaning, others interpreted by the application.
To stop a process you can use SIGTERM and SIGKILL. SIGTERM is the polite way to kill a script: you can catch it and end your daemon gracefully. You can't catch SIGKILL. SIGHUP is typically the signal you send if you want the daemon to reinitialize (e.g. reopening logs). SIGINT is the interrupt typically triggered by Ctrl-C on the terminal. SIGUSR1 is typically a request to dump state to syslog. Send signals from your script using posix_kill($pid, $signal).
Socket pairs provide a way to do bi-directional communication; they use the socket_* functions in PHP. The messaging functions may be used to send and receive messages to and from other processes. They provide a simple and effective means of exchanging data between processes, without the need for setting up an alternative using Unix domain sockets. The msg_* functions live underneath the semaphore functions in the PHP docs.
http://www.thegeekstuff.com/2010/08/ipcs-command-examples/
ipcs is a Unix/Linux command used to list information about inter-process communication; it reports on the System V IPC facilities (message queues, semaphores and shared memory). Show:
ipcs -a : all facilities
ipcs -q : message queues
ipcs -m : memory segments
ipcs -s : semaphores
ipcs -m -i SHMID : detailed info for the memory segment with SHMID
ipcs -l : lists system limits for each IPC facility
ipcs -m -c : lists creator user id and group id, and owner user id and group id
ipcs -m -p : displays the creator pid and the pid that most recently accessed the corresponding IPC facility
“memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.”
● memcached runs on every server in the web server farm. It is CPU-lightweight and memory-“hungry”.
The man/info page states that signal 0 is special: the exit code from kill tells whether a signal could be sent to the specified process (or processes).
So kill -0 will not terminate the process, and its return status can be used to determine whether a process is running.

pcntl_exec — Executes a specified program in the current process space
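The same trick is available from PHP: posix_kill() with signal 0 delivers nothing, it only reports whether the process exists and we may signal it (checking our own PID here is just for demonstration; a daemon would check the PID from its pidfile):

```php
<?php
// signal 0 performs the existence/permission check without sending anything
$pid = posix_getpid();

if (posix_kill($pid, 0)) {
    echo "process $pid is running\n";
} else {
    echo "process $pid is gone\n";
}
```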
One of the cool things about the file descriptors is that after you have closed them, the system will reattach them to the next resources you open.

There is little use in connecting STDIN, so we point it at /dev/null.

STDOUT and STDERR, on the other hand, we reconnect to log files. Whenever you echo something "to the screen", it is written to the log file instead.
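A sketch of that reattachment trick, assuming hypothetical /tmp log paths (a real daemon would use its own log locations):

```php
<?php
// close the standard streams we inherited
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);

// the next three opened resources take over fds 0, 1 and 2, in order
$stdIn  = fopen('/dev/null', 'r');        // fd 0: nothing to read
$stdOut = fopen('/tmp/daemon.log', 'ab'); // fd 1: echo ends up here
$stdErr = fopen('/tmp/daemon.err', 'ab'); // fd 2: errors end up here

echo "daemon started\n"; // written to /tmp/daemon.log, not the screen
```

The order of the fopen() calls matters: the lowest free descriptor is always reused first.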
When a process ends, all of the memory and resources associated with it are deallocated so they can be used by other processes.
A zombie (or defunct) process is a process that has completed execution but still has an entry in the process table. This entry is still needed to allow the parent process to read the child’s exit status; the resources are not deallocated until that happens.
The parent reads the child's exit status by executing a wait system call, at which point the zombie is removed. This is commonly done in a SIGCHLD signal handler in the parent (SIGCHLD is received when a child has died).
If the parent explicitly ignores SIGCHLD by setting its handler to SIG_IGN, all child exit status information is discarded and no zombie processes are left behind.
pcntl_waitpid — Waits on or returns the status of a forked child
  -1 : wait for any child process (the same behaviour the wait function exhibits)
  WNOHANG : return immediately if no child has exited
pcntl_wifexited — Checks if the status code represents a normal exit
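Reaping children with those functions can be sketched like this (real pcntl API; the child simply exits right away so there is something to reap):

```php
<?php
$reaped = 0;

// SIGCHLD arrives when a child dies; reap every zombie that is ready
pcntl_signal(SIGCHLD, function () use (&$reaped) {
    // -1: any child; WNOHANG: return immediately when none are left
    while (pcntl_waitpid(-1, $status, WNOHANG) > 0) {
        if (pcntl_wifexited($status)) {
            $reaped++;
        }
    }
});

$pid = pcntl_fork();
if ($pid === 0) {
    exit(0); // child: finish right away
}

sleep(1);                // give the child time to exit
pcntl_signal_dispatch(); // run the pending SIGCHLD handler

echo "reaped $reaped child(ren)\n";
```

The while loop matters: several children may have died before the handler runs, and one SIGCHLD delivery has to reap them all.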
Sometimes you need to do so much work that you could use an extra pair of hands. One way to do this is to start a script a number of times, but then you lose some form of control.
What you really want is a dynamic number of concurrent workers, managed by an overseer. The overseer can add new workers when it needs to, distribute work among the workers, and so on.
Sometimes people talk about “multi-threading” in PHP; well, this is what they mean. It is NOT multithreading, it’s parallel or concurrent processing.
How do you do this? You start by daemonizing your overseer following the steps we saw before. When that is done, you do another round of forking: you fork off each worker. Note that you don’t have to repeat all the earlier steps for the workers. One of the reasons we followed them was that we were unsure of how the process was started; for the workers, we know, and we are in full control.
In particular, don’t change the session id, since you want all your workers in the same process group.
Let’s have a very quick look at some code...
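A quick sketch of that second round of forking: the (already daemonized) overseer forks off a fixed number of workers and waits for them all (the worker count and the "work" are placeholders):

```php
<?php
$workerCount = 3;
$pids = [];

for ($i = 0; $i < $workerCount; $i++) {
    $pid = pcntl_fork();

    if ($pid === 0) {
        // worker: note there is no posix_setsid() here -- we *want*
        // to stay in the overseer's process group
        // ... pull jobs and do the actual work ...
        exit(0);
    }

    $pids[] = $pid; // overseer: remember the worker
}

// overseer: wait for every worker to finish
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}

echo "all workers done\n";
```

Keeping the workers in the overseer's process group means one posix_kill() to the group can stop the whole family at once.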
Gearman is a system to farm out work to other machines: dispatching function calls to machines that are better suited to do the work, doing work in parallel, load balancing lots of function calls, or calling functions between languages.

ØMQ is a high-performance asynchronous messaging library aimed at use in scalable distributed or concurrent applications. It provides a message queue.

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.
ZeroMQ is a very lightweight messaging system specially designed for high-throughput/low-latency scenarios like the ones you find in the financial world. ØMQ supports many advanced messaging scenarios but, contrary to RabbitMQ, you’ll have to implement most of them yourself by combining various pieces of the framework (e.g. sockets and devices). ØMQ is very flexible, but you’ll have to study the 80 pages or so of the guide (which I recommend reading for anybody writing a distributed system, even if you don’t use ØMQ) before being able to do anything more complicated than sending messages between 2 peers.
RabbitMQ implements a broker architecture, meaning that messages are queued on a central node before being sent to clients. This approach makes RabbitMQ very easy to use and deploy, because advanced scenarios like routing, load balancing or persistent message queuing are supported in just a few lines of code. However, it also makes it less scalable and “slower”, because the central node adds latency and the message envelopes are quite big.
ActiveMQ is in the middle ground. Like ØMQ, it can be deployed with both broker and P2P topologies. Like RabbitMQ, it makes advanced scenarios easier to implement, but usually at the cost of raw performance. It’s the Swiss army knife of messaging.