Reactive server with netty
11. Why netty?
● Less GC
● Optimized for Linux based OS
● High performance buffers
● Well defined threading model
12. Why netty?
● Less GC
● Optimized for Linux based OS
● High performance buffers
● Well defined threading model
● HTTP, HTTP/2, SPDY, SCTP, TCP,
UDP, UDT, MQTT, etc
15. When to use?
● Performance is critical
● Own protocol
● Full control over network
(so_reuseport, tcp_cork,
tcp_fastopen, tcp_nodelay, etc)
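tcp_nodelay and so_reuseaddr map directly to stdlib socket options; in Netty you would set them via ChannelOption on the bootstrap, and the Linux-only ones (so_reuseport, tcp_cork, tcp_fastopen) need the native epoll transport. A stdlib sketch of what these knobs look like:

```java
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SocketOptionsSketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open();
             SocketChannel ch = SocketChannel.open()) {
            // so_reuseaddr on the listening socket, tcp_nodelay on the client socket;
            // in Netty: b.option(ChannelOption.SO_REUSEADDR, true)
            //           b.childOption(ChannelOption.TCP_NODELAY, true)
            server.setOption(StandardSocketOptions.SO_REUSEADDR, true);
            ch.setOption(StandardSocketOptions.TCP_NODELAY, true);
            System.out.println("TCP_NODELAY=" + ch.getOption(StandardSocketOptions.TCP_NODELAY));
        }
    }
}
```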
16. When to use?
● Performance is critical
● Own protocol
● Full control over network
● Game engines (agario, slither,
minecraft)
17. When to use?
● Performance is critical
● Own protocol
● Full control over network
● Game engines
● <3 reactive
20. java.nio.channels.Selector
Selector selector = Selector.open();
channel.configureBlocking(false);
SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
while(true) {
selector.select();
Set<SelectionKey> selectedKeys = selector.selectedKeys();
Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
while(keyIterator.hasNext()) {
key = keyIterator.next();
if (key.isReadable()) { ... }
}
}
21. Selector selector = Selector.open(); // creating selector
channel.configureBlocking(false);
SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
while(true) {
selector.select();
Set<SelectionKey> selectedKeys = selector.selectedKeys();
Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
while(keyIterator.hasNext()) {
key = keyIterator.next();
if (key.isReadable()) { ... }
}
}
22. Selector selector = Selector.open();
channel.configureBlocking(false);
//registering channel with selector, listening for READ events only
SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
while(true) {
selector.select();
Set<SelectionKey> selectedKeys = selector.selectedKeys();
Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
while(keyIterator.hasNext()) {
key = keyIterator.next();
if (key.isReadable()) { ... }
}
}
23. Selector selector = Selector.open();
channel.configureBlocking(false);
SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
while(true) {
selector.select(); //blocking until we get some READ events
Set<SelectionKey> selectedKeys = selector.selectedKeys();
Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
while(keyIterator.hasNext()) {
key = keyIterator.next();
if (key.isReadable()) { ... }
}
}
24. Selector selector = Selector.open();
channel.configureBlocking(false);
SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
while(true) {
selector.select();
//now we have channels with some data
Set<SelectionKey> selectedKeys = selector.selectedKeys();
Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
while(keyIterator.hasNext()) {
key = keyIterator.next();
if (key.isReadable()) { ... }
}
}
25. Selector selector = Selector.open();
channel.configureBlocking(false);
SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
while(true) {
selector.select();
Set<SelectionKey> selectedKeys = selector.selectedKeys();
Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
while(keyIterator.hasNext()) {
key = keyIterator.next();
keyIterator.remove(); //keys must be removed from the selected set by hand
//do something with the channel's data
if (key.isReadable()) { key.channel(); }
}
}
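The slides register an already-connected channel for OP_READ; a real server also needs a ServerSocketChannel registered for OP_ACCEPT, and each key must be removed from the selected set by hand. A runnable stdlib sketch that accepts one connection and reads one message:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class NioServerSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        // the listening socket waits for ACCEPT, not READ
        server.register(selector, SelectionKey.OP_ACCEPT);

        // a client connection just to drive the loop once
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));

        boolean done = false;
        while (!done) {
            selector.select(); // blocks until some key is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove(); // must remove, or the key is reported again
                if (key.isAcceptable()) {
                    SocketChannel ch = ((ServerSocketChannel) key.channel()).accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    ((SocketChannel) key.channel()).read(buf);
                    buf.flip();
                    System.out.println("read: " + StandardCharsets.UTF_8.decode(buf));
                    done = true;
                }
            }
        }
    }
}
```

This is the boilerplate Netty's event loop hides behind `ChannelInboundHandler.channelRead`.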
28. Minimal setup
ServerBootstrap b = new ServerBootstrap();
b.group(
new NioEventLoopGroup(1),
new NioEventLoopGroup()
).channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer() {...});
ChannelFuture f = b.bind(8080).sync();
f.channel().closeFuture().sync();
29. Minimal setup
ServerBootstrap b = new ServerBootstrap();
b.group(
new NioEventLoopGroup(1), //IO thread
new NioEventLoopGroup()
).channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer() {...});
ChannelFuture f = b.bind(8080).sync();
f.channel().closeFuture().sync();
30. Minimal setup
ServerBootstrap b = new ServerBootstrap();
b.group(
new NioEventLoopGroup(1),
new NioEventLoopGroup() //worker threads
).channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer() {...});
ChannelFuture f = b.bind(8080).sync();
f.channel().closeFuture().sync();
31. Minimal setup
ServerBootstrap b = new ServerBootstrap();
b.group(
new NioEventLoopGroup(1),
new NioEventLoopGroup() //worker threads
).channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer() {...}); //pipeline init
ChannelFuture f = b.bind(8080).sync();
f.channel().closeFuture().sync();
35. ChannelInboundHandler
public interface ChannelInboundHandler extends ChannelHandler {
...
void channelRegistered(ChannelHandlerContext ctx);
void channelActive(ChannelHandlerContext ctx);
void channelRead(ChannelHandlerContext ctx, Object msg);
void userEventTriggered(ChannelHandlerContext ctx, Object evt);
void channelWritabilityChanged(ChannelHandlerContext ctx);
...
}
36. void initChannel(SocketChannel ch) {
ch.pipeline()
.addLast(new MyProtocolDecoder())
.addLast(new MyProtocolEncoder())
.addLast(new MyLogicHandler());
}
Own tcp/ip server
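MyProtocolDecoder/MyProtocolEncoder are placeholders for your own codec; in Netty you would typically extend ByteToMessageDecoder/MessageToByteEncoder. The decoder's core job — buffer incoming bytes and emit a message only once a whole frame has arrived — sketched with stdlib ByteBuffer, assuming a 2-byte big-endian length prefix (the same framing Netty's LengthFieldBasedFrameDecoder handles for you):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class LengthPrefixFraming {
    // Emits complete frames from buf; leaves a partial frame for the next read.
    static List<String> decode(ByteBuffer buf) {
        List<String> frames = new ArrayList<>();
        while (buf.remaining() >= 2) {
            buf.mark();
            int len = buf.getShort() & 0xFFFF;
            if (buf.remaining() < len) {
                buf.reset(); // not enough bytes yet: wait for more input
                break;
            }
            byte[] body = new byte[len];
            buf.get(body);
            frames.add(new String(body, StandardCharsets.UTF_8));
        }
        return frames;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.putShort((short) 2).put("hi".getBytes(StandardCharsets.UTF_8));
        buf.putShort((short) 5); // header of a second frame, body not arrived yet
        buf.flip();
        System.out.println(decode(buf)); // the complete frame only
    }
}
```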
39. void initChannel(SocketChannel ch) {
ch.pipeline()
.addLast(new HttpRequestDecoder())
.addLast(new HttpResponseEncoder())
.addLast(new MyHttpHandler());
}
Http Server
41. void initChannel(SocketChannel ch) {
ch.pipeline()
.addLast(sslCtx.newHandler(ch.alloc()))
.addLast(new HttpServerCodec())
.addLast(new MyHttpHandler());
}
Https Server
42. void initChannel(SocketChannel ch) {
ch.pipeline()
.addLast(sslCtx.newHandler(ch.alloc()))
.addLast(new HttpServerCodec())
.addLast(new HttpContentCompressor())
.addLast(new MyHttpHandler());
}
Https Server + content gzip
45. public void channelRead(Context ctx, Object msg) {
if (msg instanceof LoginMessage) {
LoginMessage login = (LoginMessage) msg;
if (isSuperAdmin(login)) {
ctx.pipeline().remove(this);
ctx.pipeline().addLast(new SuperAdminHandler());
}
}
}
Pipeline flow on the fly
46. public void channelRead(Context ctx, Object msg) {
ChannelFuture cf = ctx.writeAndFlush(response);
cf.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
future.channel().close();
}
});
}
Pipeline futures
47. @Override
public void channelRead(Context ctx, Object msg) {
ChannelFuture cf = ctx.writeAndFlush(response);
//close connection after message was delivered
cf.addListener(ChannelFutureListener.CLOSE);
}
Pipeline futures
49. public void channelRead(Context ctx, Object msg) {
ChannelFuture cf = session.sendMsgToFriend(msg);
cf.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
future.channel().writeAndFlush("Delivered!");
}
});
}
Pipeline futures
51. public void channelRead(Context ctx, Object msg) {
if (msg instanceof HttpRequest) {
HttpRequest req = (HttpRequest) msg;
if (req.method() == GET && req.uri().equals("/users")) {
Users users = dbManager.userDao.getAllUsers();
ctx.writeAndFlush(new Response(users));
}
}
}
Pipeline blocking IO
52. public void channelRead(Context ctx, Object msg) {
if (msg instanceof HttpRequest) {
HttpRequest req = (HttpRequest) msg;
if (req.method() == POST && req.uri().equals("/email")) {
mailManager.sendEmail();
}
}
}
Pipeline blocking IO
53. public void channelRead(Context ctx, Object msg) {
if (msg instanceof HttpRequest) {
HttpRequest req = (HttpRequest) msg;
if (req.method() == GET && req.uri().equals("/property")) {
String property = fileManager.readProperty();
ctx.writeAndFlush(new Response(property));
}
}
}
Pipeline blocking IO
54. public void channelRead(Context ctx, Object msg) {
...
blockingThreadPool.execute(() -> {
Users users = dbManager.userDao.getAllUsers();
ctx.writeAndFlush(new Response(users));
});
}
Pipeline blocking IO
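blockingThreadPool is an assumed application-owned pool, and dbManager is the hypothetical DAO from the earlier slide. The pattern is: run the blocking call off the event loop, then hop back to deliver the response (Netty can also do the offloading for you if you pass an EventExecutorGroup to pipeline().addLast(group, handler)). A stdlib sketch of the two-hop pattern:

```java
import java.util.concurrent.*;

public class OffloadSketch {
    public static void main(String[] args) throws Exception {
        // stand-ins: one "event loop" thread, one pool for blocking calls
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        ExecutorService blockingPool = Executors.newFixedThreadPool(4);
        CountDownLatch done = new CountDownLatch(1);

        blockingPool.execute(() -> {
            String users = slowDbCall();      // blocking work, off the loop
            eventLoop.execute(() -> {         // hop back to the loop to "write"
                System.out.println("response: " + users);
                done.countDown();
            });
        });

        done.await(5, TimeUnit.SECONDS);
        eventLoop.shutdown();
        blockingPool.shutdown();
    }

    static String slowDbCall() {
        try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        return "[alice, bob]";
    }
}
```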
58. Pipeline blocking IO
● Thread.sleep()
● java.util.concurrent.*
● Intensive operations
● Any blocking IO (files, db, smtp, etc)
60. @Override
public void channelInactive(Context ctx) {
HardwareState state = getState(ctx.channel());
if (state != null) {
ctx.executor().schedule(
new DelayedPush(state), state.period, SECONDS
);
}
}
EventLoop is Executor!
61. public void channelRead(Context ctx, Object msg) {
if (msg instanceof FullHttpRequest) {
FullHttpRequest request = (FullHttpRequest) msg;
User user = sessionDao.checkCookie(request);
...
}
super.channelRead(ctx, msg);
}
Request state
63. public void channelRead(Context ctx, Object msg) {
if (msg instanceof FullHttpRequest) {
FullHttpRequest request = (FullHttpRequest) msg;
User user = sessionDao.checkCookie(request);
ctx.channel().attr(USER_KEY).set(user);
}
super.channelRead(ctx, msg);
}
Request state
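USER_KEY here is a Netty AttributeKey (e.g. AttributeKey.<User>valueOf("user")), and later handlers read it back with ctx.channel().attr(USER_KEY).get(). The idea — the key carries the value's type, so lookups need no casts at the call site — in a minimal stdlib sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AttrSketch {
    // Minimal stand-in for Netty's AttributeKey/AttributeMap.
    static final class Key<T> {
        final String name;
        Key(String name) { this.name = name; }
    }

    static final class Attrs {
        private final Map<Key<?>, Object> map = new ConcurrentHashMap<>();
        <T> void set(Key<T> key, T value) { map.put(key, value); }
        @SuppressWarnings("unchecked")
        <T> T get(Key<T> key) { return (T) map.get(key); }
    }

    static final Key<String> USER_KEY = new Key<>("user");

    public static void main(String[] args) {
        Attrs channelAttrs = new Attrs();     // per-channel state, like channel().attr(...)
        channelAttrs.set(USER_KEY, "alice");  // set once, after the cookie check
        System.out.println("user=" + channelAttrs.get(USER_KEY)); // read in later handlers
    }
}
```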
71. Bootstrap b = new Bootstrap();
b.group(new EpollEventLoopGroup());
b.channel(EpollSocketChannel.class);
Native transport
76. Own ByteBuf
● Reference counted
● Pooling by default
● Direct memory by default
● LeakDetector by default
● Reduced branches, range-checks
80. Thread Model
ChannelFuture inCf = ctx.deregister();
inCf.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture cf) {
targetLoop.register(cf.channel())
.addListener(completeHandler);
}
});
81. Reusing Event Loop
new ServerBootstrap().group(
new EpollEventLoopGroup(1),
new EpollEventLoopGroup()
).bind(80);
82. Reusing Event Loop
EventLoopGroup boss = new EpollEventLoopGroup(1);
EventLoopGroup workers = new EpollEventLoopGroup();
new ServerBootstrap().group(
boss,
workers
).bind(80);
new ServerBootstrap().group(
boss,
workers
).bind(443);
84. Use direct buffers
ByteBuf buf = ctx.alloc().buffer(3); //pooled allocator
buf.writeByte(messageId);
buf.writeShort(OK);
ctx.writeAndFlush(buf);
94. Turn off leak detection
ResourceLeakDetector.setLevel(
ResourceLeakDetector.Level.DISABLED);
96. ● Really fast
● Low GC load
● Flexible
● Rapidly evolving
● Cool support
Summary