This book is still in active development.

Introduction

Welcome! This book will teach you everything you need to know about networking in laminar. We will discuss the important parts of network programming, why we made certain decisions, and some networking concepts in general.

Laminar is free and open source software, distributed under a dual license of MIT and Apache. This means that the engine is given to you at no cost and its source code is completely yours to tinker with. The code is available on GitHub. Contributions and feature requests will always be welcomed!

Motivation

Laminar is fully written in Rust, which gives it memory safety and freedom from data races without a garbage collector. That makes laminar a good candidate for a safe replacement for other reliable-UDP implementations. This library was written for use in the Amethyst game engine; however, we fully believe it can become an excellent reliable UDP implementation in its own right.

Similar Projects

We used some inspiration from other similar projects.

Contributing

We are always happy to welcome new contributors!

If you want to contribute, or have questions, let us know either on GitHub, or on Discord (#net).

Some Important Notices

There are a few important things you need to know in order to use laminar appropriately. If you do not follow these guidelines, laminar may not be suitable for your use case, or it may not work as expected.

  1. Packet Consistency Make sure that the client and the server send messages to each other at a consistent rate, e.g. 30 Hz. If you don't, the connection may break and the reliability and ordering aspects of laminar may not work. For more information, check out the heartbeat implementation.

  2. Reliability, transferring big data Laminar is not designed for transferring large files. The fragments of a fragmented packet are not acknowledged individually, so if one fragment is lost, the whole packet is lost. This will be improved in the future; for more information, check out fragmentation and reliability.

  3. DDOS Protection

    DDOS protection ensures that a client is not identified as a trustworthy connection merely because it sent something. If that were the case, someone could easily spoof packets and DDOS our server with new connections.

    Make sure the server responds to a message from the client. Only if the server responds will the connection to the client be tracked.

    In the future we want to have a handshaking process to simplify this process.
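The tracking rule above can be sketched in a few lines. This is a simplified illustration, not laminar's actual internals; `ConnectionTracker` and its methods are hypothetical names:

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

/// Hypothetical sketch: a peer only becomes a tracked connection
/// once the server has chosen to reply to it.
struct ConnectionTracker {
    tracked: HashSet<SocketAddr>,
}

impl ConnectionTracker {
    fn new() -> Self {
        ConnectionTracker { tracked: HashSet::new() }
    }

    /// Receiving a packet alone does NOT establish a connection,
    /// so spoofed packets cannot fill the tracker with state.
    fn on_receive(&self, from: SocketAddr) -> bool {
        self.tracked.contains(&from)
    }

    /// Only once the server decides to answer do we start
    /// tracking reliability state for this peer.
    fn on_server_reply(&mut self, to: SocketAddr) {
        self.tracked.insert(to);
    }
}

fn main() {
    let mut tracker = ConnectionTracker::new();
    let client: SocketAddr = "127.0.0.1:40000".parse().unwrap();

    // An unsolicited packet arrives: the client is not yet tracked.
    assert!(!tracker.on_receive(client));

    // The server chooses to respond; now the connection is tracked.
    tracker.on_server_reply(client);
    assert!(tracker.on_receive(client));
}
```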

Networking protocols

So the first and possibly most important question is which protocol to use and when. Let's first take a look at TCP and UDP.

IP

All communication over the internet happens over IP (Internet Protocol). This protocol only passes packets across the network, without any guarantee that they will arrive at the destination. Sometimes IP even passes along multiple copies of the same packet, and these copies make their way to the destination via different paths, causing packets to arrive out of order and duplicated.

So to be able to communicate over the network, we make use of existing protocols that provide more certainty. We will first take a look at TCP, after which we check out UDP.

TCP/IP

TCP stands for “transmission control protocol”. IP stands for “internet protocol”. Together they form the backbone for almost everything you do online, from web browsing to IRC to email, it’s all built on top of TCP/IP.

TCP is a connection-oriented protocol, which means a connection is established and maintained until the application programs at each end have finished exchanging messages. TCP provides fully reliable, ordered communication between two machines: the data you send is guaranteed to arrive, and in order. The TCP protocol will also split up and reassemble packets if they are too large.

Characteristics

  • Connection-oriented.
  • Guarantee of delivery.
  • Guarantee for order.
  • Packets will not be dropped.
  • Duplication not possible.
  • Automatic fragmentation.

UDP

UDP stands for “user datagram protocol” and it’s another protocol built on top of IP, but unlike TCP, instead of adding lots of features and complexity, UDP is a very thin layer over IP.

Like IP, UDP is an unreliable protocol. In practice, most packets that are sent will get through, but you'll usually have around 1-5% packet loss, and occasionally periods where no packets get through at all (remember, there are lots of computers between you and your destination where things can go wrong…).

Characteristics

  • Connectionless.
  • Unreliable.
  • No guarantee for delivery.
  • No guarantee for order.
  • No way of getting a dropped packet back.
  • Duplication possible.
  • No fragmentation.

Why UDP and not TCP

Those of you familiar with TCP know that it already has its own concepts of connection, reliability, ordering and congestion avoidance, so why are we writing our own mini version of TCP on top of UDP?

The issue is that multiplayer action games rely on a steady stream of packets sent at rates of 10 to 30 packets per second, and for the most part, the data contained in these packets is so time-sensitive that only the most recent data is useful. This includes data such as player inputs, the position, orientation and velocity of each player character, and the state of physics objects in the world.

The problem with TCP is that it abstracts data delivery as a reliable ordered stream. Because of this, if a packet is lost, TCP has to stop and wait for that packet to be resent. This interrupts the steady stream of packets because more recent packets must wait in a queue until the resent packet arrives, so packets are received in the same order they were sent.

What we need is a different type of reliability.

Instead of having all data treated as a reliable ordered stream, we want to send packets at a steady rate and get notified when packets are received by the other computer. This allows time sensitive data to get through without waiting for resent packets, while letting us make our own decision about how to handle packet loss at the application level.

What TCP does is maintain a sliding window where the ACK sent is the sequence number of the next packet it expects to receive, in order. If TCP does not receive an ACK for a given packet, it stops and re-sends the packet with that sequence number. This is exactly the behavior we want to avoid!
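To see why this stalls the stream, here is a toy model of TCP's cumulative acknowledgement; an illustrative sketch, not a real TCP implementation:

```rust
use std::collections::BTreeSet;

/// Toy model of TCP's cumulative ACK: the receiver acknowledges only
/// the next in-order sequence number it expects.
struct CumulativeAckReceiver {
    next_expected: u32,
    out_of_order: BTreeSet<u32>,
}

impl CumulativeAckReceiver {
    fn new() -> Self {
        Self { next_expected: 1, out_of_order: BTreeSet::new() }
    }

    /// Process an arriving sequence number, returning the ACK we would send.
    fn on_packet(&mut self, seq: u32) -> u32 {
        self.out_of_order.insert(seq);
        // Advance past any contiguous run starting at next_expected.
        while self.out_of_order.remove(&self.next_expected) {
            self.next_expected += 1;
        }
        self.next_expected
    }
}

fn main() {
    let mut rx = CumulativeAckReceiver::new();
    // Packet 2 is lost: later packets arrive, but the ACK is stuck at 2,
    // so the sender must stop and retransmit packet 2 while fresher data waits.
    assert_eq!(rx.on_packet(1), 2);
    assert_eq!(rx.on_packet(3), 2);
    assert_eq!(rx.on_packet(4), 2);
    assert_eq!(rx.on_packet(5), 2);
    // The retransmission finally arrives and the ACK jumps forward.
    assert_eq!(rx.on_packet(2), 6);
}
```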

It is not possible to implement a reliability system with these properties using TCP, so we have no choice but to roll our own reliability on top of UDP. Like TCP, UDP itself is built directly on top of IP.

When to use TCP

Of course there are use-cases for TCP, like chat, asset streaming, etc. We can set up a TCP socket for this that is distinct from UDP.

We could also make our UDP channel reliable, as described below, so that when we detect packet loss we can construct and resend a new packet.

Heartbeat

Laminar offers the possibility to keep the connection with a client open. This is done with heartbeat packets. This option is enabled by default. The behavior of the heartbeat can be changed in the configuration, and it can also be disabled.

A client is considered a connection when it sends a packet. If the client does not send a packet for x seconds, laminar sees this as an idle connection and removes it from the active connections. When this happens, the following data is removed:

  1. the reliability data, such as acknowledged packets;
  2. the buffers that keep track of ordering/sequencing;
  3. the RTT counter;
  4. fragmentation data.

Losing this data from memory is often undesirable. Therefore, it is important to have a consistent flow of packets between the two endpoints, which prevents disconnection of the client. The time before the client is disconnected can be changed in the configuration.
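The idle-connection rule can be sketched as follows. This is a simplified illustration, and the 5-second timeout is a hypothetical value; the real one comes from the configuration:

```rust
use std::time::Duration;

/// Simplified sketch of laminar's idle rule: a connection whose last
/// packet is older than the configured timeout is dropped, and its
/// reliability state (acks, ordering buffers, RTT, fragments) goes with it.
struct Connection {
    since_last_packet: Duration,
}

fn is_idle(conn: &Connection, idle_timeout: Duration) -> bool {
    conn.since_last_packet > idle_timeout
}

fn main() {
    // Hypothetical timeout of 5 seconds (the real value is configurable).
    let timeout = Duration::from_secs(5);

    let chatty = Connection { since_last_packet: Duration::from_millis(33) };
    let silent = Connection { since_last_packet: Duration::from_secs(8) };

    assert!(!is_idle(&chatty, timeout)); // kept as an active connection
    assert!(is_idle(&silent, timeout));  // removed, its state is discarded
}
```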

Why a heartbeat?

With game networking for fast-paced FPS games, you have to deal with a lot of data that has to go from point A to point B, at rates of 20/30/60 Hz. Laminar is built and optimized for the situation where there is a consistent flow of packets from the server to the client and from the client to the server. In a game where everything runs in milliseconds and speed is important, you need fast communication and multiple updates per second.

What are those scenarios, and how can I know if laminar is useful for my use case? Think of input synchronization, location updates, state updates, events, etc.
Let's zoom in on input synchronization in an FPS game. The client sends the packets; the server receives, validates, and sends an update to all other clients. In an FPS game, a lot of input is shared, and it's not a strange idea for a client to share its input and receive updates 60 times a second.
Laminar is based on this idea and is optimized for it. If you are sending packets once a second, laminar might not be the best solution, and you're probably going to do fine with TCP.

To add to this, note that clients will be seen as 'disconnected' if they don't send packets for some duration; this duration can be found in the configuration. For scenarios where you send packets less frequently, laminar has the option to keep the connection alive by sending a heartbeat message at a configurable interval.
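A minimal sketch of the keep-alive decision. The interval values here are hypothetical; the real interval comes from the configuration:

```rust
use std::time::Duration;

/// Sketch of the keep-alive rule: if nothing has been sent for a full
/// heartbeat interval, emit a heartbeat packet so the other endpoint
/// does not mark the connection as idle.
fn needs_heartbeat(since_last_send: Duration, interval: Duration) -> bool {
    since_last_send >= interval
}

fn main() {
    let interval = Duration::from_secs(1);

    // A busy connection sending at 60 Hz never triggers a heartbeat...
    assert!(!needs_heartbeat(Duration::from_millis(16), interval));

    // ...but a quiet one does, keeping the connection alive.
    assert!(needs_heartbeat(Duration::from_millis(1500), interval));
}
```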

Fragmentation

Fragmentation is dividing large packets into smaller fragments so that they can be sent over the network.

TCP will automatically divide packets into smaller parts if you send large amounts of data. But UDP doesn't support fragmentation out-of-the-box. Fortunately, laminar does.

Fragmentation will be applied to packets larger than the MTU for the following reliability types: Reliable Unordered, Reliable Ordered, and Reliable Sequenced.

What is this MTU? It stands for 'maximum transmission unit'. On the internet today (2016, IPv4) the real-world MTU is 1500 bytes. When a packet is larger than 1500 bytes, we need to split it up into fragments. Why 1500? That's the default MTU for macOS and Windows.

You should note that, in our implementation, individual fragments are not acknowledged. So if you send 200,000 bytes (~133 fragments), the risk of at least one fragment being dropped is huge. If you really want to send large amounts of data, go for TCP instead, since that protocol is built for reliability and large data.
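The arithmetic behind this warning can be sketched as follows, assuming ~1,500-byte fragments and, for the loss estimate, an illustrative 1% per-fragment loss rate:

```rust
/// Ceiling division: a partial final fragment still needs its own packet.
fn fragment_count(payload_bytes: u32, fragment_size: u32) -> u32 {
    (payload_bytes + fragment_size - 1) / fragment_size
}

fn main() {
    // 200,000 bytes in ~1,500-byte fragments (the "+- 133" from the text):
    let n = fragment_count(200_000, 1_500);
    assert_eq!(n, 134);

    // Without per-fragment acknowledgement, the whole packet survives only
    // if every single fragment does. With an illustrative 1% fragment loss:
    let p_intact = 0.99_f64.powi(n as i32);
    assert!(p_intact < 0.3); // roughly a 26% chance: the packet is usually lost

    println!("{} fragments, P(all arrive) = {:.2}", n, p_intact);
}
```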

When sending small packets of about 4,000 bytes (4 fragments), this method works fine and probably won't cause any problems. We are planning to also support sending larger packets with acknowledgements.

Laminar's implementation

Laminar fragments your packet if it exceeds the fragment size.

Fragments of a large packet are not yet acknowledged, which is a problem if you want to send very large files. If you want to send really large files, we advise you to split up your data and send it in pieces with the 'reliable ordered' option. In the future, laminar will be able to send large packets with acknowledgement.

Interesting Reads

Introduction

The internet is a dangerous place, and before you know it your data is gone or your data arrives duplicated because your data is split up along the way to its final destination. In order to have more control over the way in which the data is transported, we have invented protocols.

In this chapter we will consider how laminar gives you more control over the transport of data.

Important

TCP is made for reliability and does this very well. We have been asked many times why reliability does not work well or is slow in laminar. It is important to know that laminar offers reliability as an option, but it is not focused on being faster or better than TCP. For fast-paced multiplayer online games it is not desirable to use TCP, because a delay in one packet can have a major impact on all subsequent packets. Reliability, after all, is less important for fast-paced FPS games, which is why they favor UDP. TCP should be used when the need for reliability trumps the need for low latency. That said, laminar will support acknowledgement of fragments in the future; check out fragmentation for more info.

  • Ordering: How we can control the way our data is ordered.
  • Reliability: How we can control the arrival of our data.

Reliability

So let's talk about reliability. This is a very important concept, which may seem difficult at first sight but will be very handy later on.

As you know, we have two opposites: TCP on one hand and UDP on the other. TCP has a lot of features UDP does not have, as shown below.

TCP

  • Guarantee of delivery.
  • Guarantee for order.
  • Packets will not be dropped.
  • Duplication not possible.
  • Automatic fragmentation.

UDP

  • Unreliable.
  • No guarantee for delivery.
  • No guarantee for order.
  • No way of getting the dropped packet.
  • Duplication possible.
  • No fragmentation.

It would be useful if we could somehow specify the features we want on top of UDP. For example: I want a guarantee that my packets arrive, but they don't need to be in order. Or: I don't care whether my packet arrives, but I only want to receive the newest ones.

Before continuing, it would be helpful to understand the difference between ordering and sequencing; see the ordering documentation.

The 5 Reliability Guarantees

Laminar provides 5 different ways for you to send your data:

Reliability Type     | Packet Drop | Packet Duplication | Packet Order | Packet Fragmentation | Packet Delivery
---------------------|-------------|--------------------|--------------|----------------------|----------------
Unreliable Unordered | Any         | Yes                | No           | No                   | No
Unreliable Sequenced | Any + old   | No                 | Sequenced    | No                   | No
Reliable Unordered   | No          | No                 | No           | Yes                  | Yes
Reliable Ordered     | No          | No                 | Ordered      | Yes                  | Yes
Reliable Sequenced   | Only old    | No                 | Sequenced    | Yes                  | Only newest

Unreliable

Unreliable: Packets can be dropped, duplicated or arrive in any order.

Details

Packet Drop | Packet Duplication | Packet Order | Packet Fragmentation | Packet Delivery
------------|--------------------|--------------|----------------------|----------------
Any         | Yes                | No           | No                   | No

Basically just bare UDP. The packet may or may not be delivered.

Unreliable Sequenced

Unreliable Sequenced: Packets can be dropped, but cannot be duplicated, and arrive in sequence.

Details

Packet Drop | Packet Duplication | Packet Order | Packet Fragmentation | Packet Delivery
------------|--------------------|--------------|----------------------|----------------
Any + old   | No                 | Sequenced    | No                   | No

Basically just bare UDP: packets are free to be dropped, but with some sequencing applied so that only the newest packets are kept.
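A sequencing filter of this kind can be sketched in a few lines; an illustrative model, not laminar's actual implementation:

```rust
/// Sketch of a sequencing filter: a packet is delivered only if its
/// sequence number is newer than the newest one seen so far.
struct SequenceFilter {
    newest: Option<u32>,
}

impl SequenceFilter {
    fn new() -> Self {
        Self { newest: None }
    }

    /// Returns true if the packet should be handed to the user.
    fn accept(&mut self, seq: u32) -> bool {
        match self.newest {
            // Old or duplicate sequence numbers are dropped.
            Some(n) if seq <= n => false,
            _ => {
                self.newest = Some(seq);
                true
            }
        }
    }
}

fn main() {
    let mut filter = SequenceFilter::new();
    let arrivals = [1u32, 3, 2, 5, 4];
    let delivered: Vec<u32> =
        arrivals.iter().copied().filter(|&s| filter.accept(s)).collect();
    // Old packets 2 and 4 are discarded; only ever-newer ones get through.
    assert_eq!(delivered, vec![1, 3, 5]);
}
```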

Reliable Unordered

Reliable Unordered: All packets will be sent and received, but without order.

Details

Packet Drop | Packet Duplication | Packet Order | Packet Fragmentation | Packet Delivery
------------|--------------------|--------------|----------------------|----------------
No          | No                 | No           | Yes                  | Yes

Basically, this is almost TCP without ordering of packets.

Reliable Ordered

Reliable Ordered: All packets will be sent and received, in the order in which they were sent.

Details

Packet Drop | Packet Duplication | Packet Order | Packet Fragmentation | Packet Delivery
------------|--------------------|--------------|----------------------|----------------
No          | No                 | Ordered      | Yes                  | Yes

Basically this is almost like TCP.

Reliable Sequenced

Reliable Sequenced: All packets will be sent and received, but arranged in sequence. This means only the newest packets will be let through; older packets will be received but won't be passed on to the user.

Details

Packet Drop | Packet Duplication | Packet Order | Packet Fragmentation | Packet Delivery
------------|--------------------|--------------|----------------------|----------------
Only old    | No                 | Sequenced    | Yes                  | Only newest

Basically this is almost TCP-like, but with sequencing instead of ordering.

Example


use laminar::Packet;
use std::net::SocketAddr;

// A hypothetical destination address and payload, just for illustration.
let destination: SocketAddr = "127.0.0.1:12345".parse().unwrap();
let bytes = vec![0u8; 4];

// You can create packets with different reliabilities.
let unreliable = Packet::unreliable(destination, bytes.clone());
let reliable = Packet::reliable_unordered(destination, bytes.clone());

// We can specify on which stream and how to arrange our packets;
// check out our book and documentation for more information.
let unreliable_sequenced = Packet::unreliable_sequenced(destination, bytes.clone(), Some(1));
let reliable_sequenced = Packet::reliable_sequenced(destination, bytes.clone(), Some(2));
let reliable_ordered = Packet::reliable_ordered(destination, bytes, Some(3));

Related

Arranging packets

Laminar provides a way to arrange packets over different streams.

The above sentence contains a lot of important information; let us zoom in on it a little.

Ordering VS Sequencing

Let's define two concepts here. Sequencing: the process of only caring about the newest items. Ordering: the process of putting items in a particular order.

  • Sequencing: Only ever-newer items are passed through, e.g. 1,3,2,5,4 results in 1,3,5.
  • Ordering: All items are returned in order; 1,3,2,5,4 results in 1,2,3,4,5.
  • Arranging: We call the combined process of ordering and sequencing the 'arranging' of packets.

Due to the design of the internet, it is not always guaranteed that packets will arrive or that they will be received in the order they were sent. Fortunately, Laminar's implementation grants the ability to optionally specify how reliable and ordered (or not) the stream of packets is delivered to the client.

How ordering works

If we were to send the packets 1,2,3,4,5, but something on the internet causes them to arrive at their final destination as 1,5,4,2,3, then laminar ensures that they are delivered to the client as 1,2,3,4,5.
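The reordering described above requires buffering out-of-order arrivals until every earlier packet has been delivered. A minimal sketch, not laminar's actual implementation:

```rust
use std::collections::BTreeMap;

/// Sketch of an ordering buffer: out-of-order arrivals are held back
/// and released only once every earlier packet has been delivered.
struct OrderingBuffer {
    next: u32,
    held: BTreeMap<u32, &'static str>,
}

impl OrderingBuffer {
    fn new() -> Self {
        Self { next: 1, held: BTreeMap::new() }
    }

    /// Feed one arrival; returns everything that can now be delivered in order.
    fn on_packet(&mut self, seq: u32, payload: &'static str) -> Vec<&'static str> {
        self.held.insert(seq, payload);
        let mut ready = Vec::new();
        while let Some(p) = self.held.remove(&self.next) {
            ready.push(p);
            self.next += 1;
        }
        ready
    }
}

fn main() {
    let mut buf = OrderingBuffer::new();
    let mut out = Vec::new();
    // Sent as 1..=5, arriving as 1,5,4,2,3 (the example from the text).
    for (seq, data) in [(1, "a"), (5, "e"), (4, "d"), (2, "b"), (3, "c")] {
        out.extend(buf.on_packet(seq, data));
    }
    // Despite the scrambled arrival, delivery order matches send order.
    assert_eq!(out, vec!["a", "b", "c", "d", "e"]);
}
```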

Arranging Streams

What are these 'arranging streams'? You can see an 'arranging stream' as a way to arrange packets that have no relationship to one another, either in order or in sequence.

Simple Example

Think of a highway with several lanes. Because of these lanes, traffic can move faster. For example, cargo trucks drive in the right lane and high-speed cars in the left. The cargo trucks do not influence the fast cars, and vice versa.

Real Example

If a game developer wants to send data to a client, they might want to send it ordered, unordered, or sequenced.

'Data' could be the following:

  1. Player movement, we want to order player movements because we don't want the player to glitch.
  2. Bullet movement, we want to sequence bullet movement because we don't care about old positions of bullets.
  3. Chat messages, we want to order chat messages because it is nice to see the text in the right order.

Player movement and chat messages are totally unrelated to each other, and you absolutely do not want to hold up the movement packets when a chat message is dropped or delayed.

It would be nice if we could order player movements and chat messages separately. Guess what: this is exactly what 'arranging streams' do. A game developer can indicate on which stream to arrange which packets. For example: "Order all chat messages on 'stream 1' and sequence all motion packets on 'stream 2'."

Example


use laminar::Packet;
use std::net::SocketAddr;

// A hypothetical destination address and payload, just for illustration.
let destination: SocketAddr = "127.0.0.1:12345".parse().unwrap();
let bytes = vec![0u8; 4];

// We can specify on which stream and how to arrange our packets;
// check out our book and documentation for more information.
let unreliable_sequenced = Packet::unreliable_sequenced(destination, bytes.clone(), Some(1));
let reliable_sequenced = Packet::reliable_sequenced(destination, bytes.clone(), Some(2));
let reliable_ordered = Packet::reliable_ordered(destination, bytes, Some(3));

Take notice of the last Option parameter; with it you specify the stream to arrange your packets on. One important thing to understand is that 'sequenced streams' are different from 'ordered streams': packets given Some(1) on a sequenced stream and Some(1) on an ordered stream are arranged separately from one another. You can use 254 different ordering or sequencing streams; in reality you'll probably only need a few. When specifying None, stream '255' will be used.

Interesting Reads

Congestion Avoidance

So let's start with what congestion avoidance is. If we just send packets without caring about the internet speed of the client, we can flood the network. The router tries to deliver all packets, so it buffers them up in its cache. We do not want the router to buffer up packets; instead it should drop them. So we need to avoid sending too much bandwidth in the first place, and if we do detect congestion, back off and send even less.

There are a few methods we can implement to defeat congestion.

  1. With RTT
  2. With packet loss [TODO]

Unfortunately, congestion avoidance has not yet been implemented for laminar.

Round Trip Time (RTT)

The time between sending a packet and receiving an acknowledgement for it from the other side is called the RTT. To avoid congestion, we first need a way to calculate the RTT of our connection, so that we can decide on top of that value whether we have a bad or a good connection.

Smoothing factor

So you could say: "Very simple, measure the time between sending and receiving, you've got the RTT and you're done, right?" No! Because a packet can travel any path over the internet, the RTT can differ every time you calculate it, and even a short internet lag would directly give us a huge RTT. So we need to smooth the RTT by some factor. Gaffer says that 10% of the RTT will be just fine. With this smoothed value we can adjust our current RTT.

Allowed RTT value

So now we have the smoothing factor and our current RTT, great! But an RTT value on its own is not bad; it only becomes a problem above some maximum allowed RTT. So we subtract that maximum from our measured RTT and multiply the result by the smoothing factor.

The formula looks like the following:

// rtt_max_value is in milliseconds
// rtt_smoothing_factor is a fraction, e.g. 0.10 for 10%
let new_rtt_value = (rtt - rtt_max_value) * rtt_smoothing_factor;

Let's look at an example with numbers. The RTT values are in milliseconds.

bad internet

// this results in: 5
let new_rtt_value = (300.0 - 250.0) * 0.10;

good internet

// this results in: -15
let new_rtt_value = (100.0 - 250.0) * 0.10;

As you can see, when the measured RTT is under 250 ms we get a negative result, which in this case is good. When it is above 250 ms the result is positive, which is bad.

So each time we receive an acknowledgement, we can add the result of the above formula to the RTT stored for the connection.
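Putting the formula and the update step together, a sketch using the example numbers from above:

```rust
/// The RTT adjustment described above: subtract the maximum allowed RTT
/// from the measured sample and apply the smoothing factor.
fn rtt_adjustment(measured_ms: f64, max_ms: f64, smoothing: f64) -> f64 {
    (measured_ms - max_ms) * smoothing
}

fn main() {
    // Bad connection: 300 ms against the 250 ms allowance gives +5.
    assert!((rtt_adjustment(300.0, 250.0, 0.10) - 5.0).abs() < 1e-9);
    // Good connection: 100 ms gives -15, pulling the stored RTT down.
    assert!((rtt_adjustment(100.0, 250.0, 0.10) + 15.0).abs() < 1e-9);

    // On every acknowledgement the connection's stored RTT is nudged by
    // the adjustment, so it drifts up on bad links and down on good ones.
    let mut rtt = 0.0;
    for sample in [300.0, 280.0, 100.0] {
        rtt += rtt_adjustment(sample, 250.0, 0.10);
    }
    // 5 + 3 - 15 = -7 overall: the good samples outweigh the bad ones.
    assert!((rtt + 7.0).abs() < 1e-9);
}
```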

Interesting Reads