12. Reactor pattern
“Event handling pattern for handling service
requests delivered concurrently to a service
handler by one or more inputs. The service
handler then demultiplexes the incoming
requests and dispatches them synchronously
to the associated request handlers.”
http://en.wikipedia.org/wiki/Reactor_pattern
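The definition reads abstractly, so here is a toy, stdlib-only sketch of the idea in plain Ruby (all names are illustrative; this sketches the pattern itself, not EventMachine's implementation). `IO.select` plays the synchronous demultiplexer, and a hash mapping IOs to handler symbols plays the event dispatcher:

```ruby
require 'socket'

server   = TCPServer.new('127.0.0.1', 0)   # port 0 = any free port
handlers = { server => :accept }           # registered event handlers

REACTOR = Thread.new do
  loop do
    # Demultiplex: block until one or more inputs are ready
    readable, = IO.select(handlers.keys)
    readable.each do |io|
      # Dispatch synchronously to the associated handler
      case handlers[io]
      when :accept                         # new-connection event
        handlers[io.accept] = :echo
      when :echo                           # data-ready event
        begin
          io.write(">> #{io.readpartial(1024)}")
        rescue EOFError
          handlers.delete(io)
          io.close
        end
      end
    end
  end
end
```

Note that the handlers run one at a time on the reactor thread: concurrency is in the arrival of input, not in the dispatch.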
13. HTTP, keyboard, UDP, etc.
Service handler
Demultiplexer
Event dispatcher
Event Handler A
Event Handler B
Event Handler C
Event Handler D
15. HTTP, keyboard, UDP, etc.
Service handler
Demultiplexer
Event dispatcher
Thread 1
Thread 2
...
Thread 20
Event Handler A
Event Handler B
Event Handler C
Event Handler D
23. require 'eventmachine'

class EchoServer < EM::Connection
  def post_init
    puts "New connection"
  end

  def unbind
    puts "Connection closed"
  end

  def receive_data(data)
    send_data ">> #{data}"
  end
end

EM.run do
  EM.start_server('127.0.0.1', 9000, EchoServer)
  puts "Started server at 127.0.0.1:9000"
end
24. require 'eventmachine'

class EchoServer < EM::Connection
  def post_init
    puts "New connection"
  end

  def unbind
    puts "Connection closed"
  end

  def receive_data(data)
    send_data ">> #{data}"
  end
end

EM.run do
  EM.start_server('127.0.0.1', 9000, EchoServer)
  puts "Started server at 127.0.0.1:9000"
end

# $ telnet localhost 9000
# Hello
# >> Hello
# Bye
# >> Bye
25. # TCP
EM.run do
  EM.start_server('127.0.0.1', 9000, EchoServer)
end

# UDP
EM.run do
  EM.open_datagram_socket('127.0.0.1', 9000, EchoServer)
end

# Unix-domain server
EM.run do
  EM.start_unix_domain_server('/tmp/sock', nil, EchoServer)
end
26. require 'eventmachine'

class EchoClient < EM::Connection
  def post_init
    puts "Sending stuff to server"
    send_data("Why, hello there!")
  end

  def unbind
    puts "Connection closed"
  end

  def receive_data(data)
    puts ">> #{data}"
  end
end

EM.run do
  EM.connect('127.0.0.1', 9000, EchoClient)
end
37. EM.run do
  get_stuff = Proc.new do
    # ...
    long_running_io()
  end

  use_stuff = Proc.new do |io_results|
    # ...
  end

  # ...
  EM.defer(get_stuff, use_stuff)
end
42. EM.run do
  channel = EM::Channel.new

  EM.defer do
    channel.subscribe do |msg|
      puts "Received #{msg}"
    end
  end

  EM.add_periodic_timer(1) do
    channel << Time.now
  end
end
44. class LoanRequest
  include EM::Deferrable

  def initialize(name)
    @name = name

    callback do |who|
      puts "Approved #{who}!"
    end

    errback do |who|
      puts "Denied #{who}!"
    end
  end

  def approved!
    # succeed( *args )
    set_deferred_status(:succeeded, @name)
  end

  def denied!
    # fail( *args )
    set_deferred_status(:failed, @name)
  end
end
45. class LoanRequest
  include EM::Deferrable

  def initialize(name)
    @name = name

    callback do |who|
      puts "Approved #{who}!"
    end

    errback do |who|
      puts "Denied #{who}!"
    end
  end

  def approved!
    # succeed( *args )
    set_deferred_status(:succeeded, @name)
  end

  def denied!
    # fail( *args )
    set_deferred_status(:failed, @name)
  end
end

EM.run do
  s1 = LoanRequest.new('Marc')
  s1.approved!

  s2 = LoanRequest.new('Chris')
  EM.add_timer(2) { s2.denied! }
end
46. class LoanRequest
  include EM::Deferrable

  def initialize(name)
    @name = name

    callback do |who|
      puts "Approved #{who}!"
    end

    errback do |who|
      puts "Denied #{who}!"
    end
  end

  def approved!
    # succeed( *args )
    set_deferred_status(:succeeded, @name)
  end

  def denied!
    # fail( *args )
    set_deferred_status(:failed, @name)
  end
end

EM.run do
  s1 = LoanRequest.new('Marc')
  s1.approved!

  s2 = LoanRequest.new('Chris')
  EM.add_timer(2) { s2.denied! }
end

# :00 Approved Marc!
# :02 Denied Chris!
56. class Mailer
  include EM::Deferrable

  def add_mailing(val)
    callback do
      sleep 1
      puts "Sent #{val}"
    end
  end

  def connection_open!
    puts 'Open connection'
    succeed
  end

  def connection_lost!
    puts 'Lost connection'
    set_deferred_status nil
  end
end
57. class Mailer
  include EM::Deferrable

  def add_mailing(val)
    callback do
      sleep 1
      puts "Sent #{val}"
    end
  end

  def connection_open!
    puts 'Open connection'
    succeed
  end

  def connection_lost!
    puts 'Lost connection'
    set_deferred_status nil
  end
end

EM.run do
  m = Mailer.new
  m.add_mailing(1)
  m.add_mailing(2)
  m.connection_open!

  EM.add_timer(1) do
    m.connection_lost!

    EM.add_timer(2) do
      m.add_mailing(3)
      m.add_mailing(4)
      m.connection_open!
    end
  end
end
58. class Mailer
  include EM::Deferrable

  def add_mailing(val)
    callback do
      sleep 1
      puts "Sent #{val}"
    end
  end

  def connection_open!
    puts 'Open connection'
    succeed
  end

  def connection_lost!
    puts 'Lost connection'
    set_deferred_status nil
  end
end

EM.run do
  m = Mailer.new
  m.add_mailing(1)
  m.add_mailing(2)
  m.connection_open!

  EM.add_timer(1) do
    m.connection_lost!

    EM.add_timer(2) do
      m.add_mailing(3)
      m.add_mailing(4)
      m.connection_open!
    end
  end
end

# Open connection
# Sent 1
# Sent 2
# Lost connection
# Open connection
# Sent 3
# Sent 4
59. Gotchas
• Inverted flow of control can make debugging difficult
• Synchronous (blocking) code will slow the reactor down
• Use or write EM-aware libraries
ScaleConf: Doing anything at scale requires better decisions about the tools you use. Just because the fit seemed OK at first, when things get rolling you really want to have the right kind of hammer.
... and to achieve dramatic results, you need to orchestrate a specialized set of components. Complexity is a tradeoff based on the domain of your problem.
c10k problem.
C++ reactor: MRI, YARV, Rubinius. A Java reactor for, um, Java.
We're used to sequential code; evented code stores some block... and executes it at some later stage.
Input is received concurrently. Event dispatch is synchronous. "The reactor design pattern is an event handling pattern for handling service requests delivered concurrently to a service handler by one or more inputs. The service handler then demultiplexes the incoming requests and dispatches them synchronously to the associated request handlers."
Limitations in terms of select/epoll and the number of open file descriptors.
"First, you need to tell EventMachine to use epoll instead of select. Second, you need to prepare your program to use more than 1024 descriptors, an operation that generally requires superuser privileges. Third, you will probably want your process to drop the superuser privileges after you increase your process's descriptor limit." - See: http://eventmachine.rubyforge.org/docs/EPOLL.html
Connection -> servers and clients
next_tick -> run code at the next opportunity (always run in the main thread)
defer -> defer work to run on a thread (green) - 20 by default
Queue -> data
Channel -> comms
The main reactor is single threaded - similar to `while reactor_running?; ... end`. EM.run takes over the process... it's blocking.
... so anything that blocks the main reactor is a no-no. Anything that takes more than a few milliseconds should run on a separate thread or be broken into smaller blocks and run on next_tick.
EM::Connection is used for creating clients and servers.
EM is interchangeable with EventMachine. receive_data is unbuffered. These methods are the only ones that will be called by the event loop. You can also define a module, and its behaviour will be mixed into an EM::Connection.
next_tick schedules work to happen on the main thread on the next iteration of the reactor.
next_tick is a tool for bringing data/work back into the run loop. Um, wtf for? You may find yourself asking...
Tasks are not broken up into smaller pieces... it takes really long to complete tasks 2 and 3. Do very little work in the main reactor.
defer schedules work to take place on a different thread: it allows work to be done on one of the thread pool threads (20 by default).
get_stuff runs in a separate thread. The data is brought back to the main thread and passed on to the callback; the callback executes on the main thread. Kinda like a future.
Queue: an ordered message queue; thread safe.
Popped data is brought back to the main thread; push/pop are scheduled on the next iteration of the main reactor thread.
Infinite processing - always do work if there is some available; data pops off the queue only when data is available (no blocking).